var/home/core/zuul-output/
var/home/core/zuul-output/logs/
var/home/core/zuul-output/logs/kubelet.log.gz
[Binary tar archive (owner: core) containing a gzip-compressed kubelet log; compressed payload omitted as unrecoverable text.]
r-ӪxѮ /^iIii{6)'w^-'r-pRK3[Jt/zK ^* / C #(2%.|ރ2Iq{[f1@]@$.}b"였f]9g7r&E~:Aq㨀B'FX($Zj|H,M>ds(ڄ&@ ;@I-D tL B]Y`ɩx1T+`(e 빱1$& TxZĜS)Omk<5Jb^HiDi./ͺG/1#1T%fŸglVޖI :po40_{ʕ*-թdu:^gfX`svHθę?.rB!iNID%Ty$L:c@7hdqq,p#2J{=w|l#N\FѵKJb ,j^u?{ XjCi \!^thB#MQ|8aƵU?F{_üMlnQko͋noMs8cѕ6mK=c|.yxwwWw9>R#,Á{5\잵y5ﭒW7?^Y٠ƝC>>Zyh9/燸X<7j]1E>~`C=/S}./^67<>Pd XCgBC%S}(Jִ;t2h;t(I=]\aw0th6}݉xn;]e$zz=t=+r5 ssRu{1ƿ-zq$F+E(>a )R <0man kЯqsP*02y=oO+/| +-;2;W&Y `ϔ^kfE4 8x 7/d0.Z`~^4ߜ}gMy.+ aaO跥!\ZF OL3h"R拶kT̀ T4@+R1e/3J.{HEa׫c?]m=\y쇮C)l ڂTOWOzSe]ߘmCB~(MZmLVSu QUJD ~JxJtb2շl!v;䴠TSJP Ub'כt0r|İK1[!ʀ_`K[tp%]CVt"t$= ʤ/#s0jꨊa˙^G8N1"J'@5OyPؕ?qZQn{;%s'M!*s`v1%HiDJM{ݥ-5k*ç~( gLt3/N=B:/ ]!`*yg*U+thNW%5h-]!`Ig* ]e Qjk+a&Ct ]e3qVUFyOWjBLwpMgAD?t(GPtzJ(DM ]eu2Z}B2JϲFҊ5maI[Ujޕ#z];tV 得sPeKR3n:#3\Ѫ7( W#͆]o OW[|=gmVɳ-tW6x>4((;%1R!7+mzS%Wi MgTv3Z N塭iEhK CtBהּ++m;BOWe*#M``4~i6K|Wf;:xr9ӓUr|=*WuͿ\)kƻ$A ڦRSJ!fQrZuUq.ǃlaYp-ZxkJ*qhmO j87oZ򧪠ؤ;oA9>=91Vz0LpRE2ZGO<y0)gXa$'~ rpOWKhto5;e'cY^1)N?Z&)sP)c$pPpYmQpeL K0$:Fm 'E}Y|dnM t149q;]=hK=`?kM~DkYM1#_j鲖k- Icm;=^[JW]a^Cc.PXd*nH8Tq(THF1.DdDa^xI$ȘCIo:^(*>{C͔M2xdOf JJSoBGD8"xAVIi4X&$c }@HaBDE7h_9(.l!_մ@K.=,$;BБ:AG' p\Q`nCѱH-J`dhtXGHDyi,ȅH)$ 2`1hIIK,HT&Duq${h1u? %Ǚ"1yx_iWa*Q !pyw]l\O/],>g[ ֏*iZPvʘ]\ךԔF7ԘcwU adUMK*@+̇*V.۪}rL'U%0-f,ΊjF)87V-L@K㷧гlv9EwmH&oEC.3` Bdɑ$}˶,Ӷt$bWY)V|4OIN?V7Wڭβ ʦiYӋ"Hg]Mf1ޝ=eh+q䒝|0Dj'n2I~p}2Ҫvq_[K39I" &fW&0R*%Tk~x?9YHI\F.̒aʴ;-ҵ+Cۢ}m?-:CζN`R( ,m>]N& 6y~;Pd'=Z-z7pૢȴWH~`0wBBtc96m@D>lWw VV:t{YC،I9*hQ893,x#M{$/! OD\XUo`0se?Y|)ժ/|g7pRPIK!`(I>No$g\)v{8up&izu{pLW6'n^}{3x:{f^Z$#$jAk_"TCWh4*`2z0w|I|4kdLIYK7q1Egx8F5R=?q6)-]UYblŶD%zy~5}zI[[2O ?9B[}|MC$j񋸂x"K0)x|]y0lnRTɤ 6rKƣэc`P>xKvl"凈3:5QEn01YL*$d( r_mt0u.jg0KC,ǔl 4 "*"89t9F1܇t6&`Zڔ$dzˑ>1iS)ha]Z[43Bv?^L}ih D+1H2a$ d5f8 鈓IsZKpʩ~G{ ԻuTLĜ2Bq{1Ģd粅]. VUp?ձF\ǂ87n!(xHw 2 \8Oz,5|P3v:vD:6N-.~tw_^ܴX v/^hB1݁.:̜K zzh;B7kkP4e.=ct="d)ȒVq\>lCFV!6jz>k3_v',ӸNRv68$b7G5*M&,K$1N^.ܟ<'Ka=qIä*d&l*1ZcH[mf*.Y%. 
غ쑌N!$dUpl0'M9*$hHr`@-qvkZuLzh4͛HګկY ,7pYn|!f@&*XBO)P>P.rnJb}9OZx22wuu?ֳVlNʚizL5pIɥUd۔\K>- #w@pVym<荸x?,_9"_\z Pyd䖚E] 󥑯?&Q+&S Yf &Us͐5D1B< .M פن2 Y$J$aK)$=ګa*jV$4cZBӸk7D/%y<90ykBH:_IWɨy})|Qw6d=oh;[sQ?_U9aA*U\F)D@ȭ:#S al]2j49=+jĘ2=)`@ebKs&MQs-JLFe&[Qʋj3P[ml¥D٦WOy^;hr= -?kи`eПLs-st C2WLKAVh1?"0dlZzLU-PdIP/ڔ,l4CM&i˦pf`e]Mۏq\ծ6;ڊ׵ڊuVC[äm5cZ`0:=!r`[ZhoP)AX&SU{6.xT\C&d% ![K#HApLs>fU#&jo`Wg=lI} qc*X,bqEEܑƘN6O.)EHp>e]' 0ⲙRhIb$%YU$N.vAIknDI4.l"/,uLjP+E.:x'Pe}M'tZݽ:k7ga} U=oRb}t٬^3Mi|j^KBF:bnYr e.įBՖ}FUk'[DS "{H D>e61x)xkfE 2d*MLJ?WmE fgY}Ӱ  bk}C[F=oޔڕ 1t׻m wFozw?Ïї~Td_WN{0҅ȅOpr~{ۀ,-ڼvH념 `ɐ֭r'_lLvLc#)Y;}Pbij'?gL \1JZo9 JܣzQ.[+Y]772 b&ey,D"X^ ( Ъܝ\$w*;gE6q<:SN?~^B3uuy"q+_ͽWM=n.Z J&t <ϝӫW.o/bGcYB_ z{nztAg j]^ Δ 8gLi}-gJr%Hϔ&)5cݙ?R7O\w:!1Yk:7I%RJ+Ӑ:Vt7G{CVE:eA[́ D2j\t^=Jվ8*=TO~x&}>_fwt=_^[EwF{Ncs ou#NNH r] nuu3k{dG6=,>mԲwmx|xwsG+-χrWny9Wa[9|GcJ=M;O\<龧nnv)~Χ"-s\/JT\(dc˓]~(2ȡ(ak^;SO;QcsgKSrk0D#hUZZÄ\z3aI3AX%ɿ8{1HIp.բz^-0+90koap\K=u<űCG/4*Q;&z|B3ۅ{,,5s2 ώ< ߯߯j3v؜xKI Z%.*X 4Ƅ>;\& iBPp`QHC'Gw &! Q-Ǭ˂&OejoG<`s$"8x9 dLkX)*ƈ1`'RYAPip3R+,>xFj&px"9A9%M)b'UQBcI`)ЖTGA4ϧ4}k69 D1)B BXѲL \wmm譧]6Nghb}i2Ȓ%'g$ٲ% K6'9~AkY4Rb:j12d.xż3C( {CXkKC bCm 3挑>rR %Ϧ6jIMbIl 0'24`_cR؞.xT.%KUщlդzJ#9|&p:6mhF#FcyYBɡf o#γ$۫@Հ_ϛEk ZQH4 !IJObРP/+L%+B.^@W6iQ3;ܫؐO|_d_᷵Ñ&XMg2}bBr4@&asKU_[,>'fd kuGع JT.W);#P\d,t_DaS zqˈdu=T|̮ǗAL _<_"ErrȱxK91ߎ9o_g]6VBq N0-Sdg2PF,7J5%Q T68KHr|t.13E&F\) gge tsTـH&"hq14f<d-JLy$}3!hщReR#oŅ"X7JiB̈́8D&D) u>FCt#zn3+*RJ]*4,"FI!5 &P4rfjpTD8 "d]~8`^ 0XdFssM%m@"@I WLRj%<$:V{[}DŽs_"/-Zr)u؁Ol/A/QԲ~EµepЈ2.҃".CT>YjpH"+l~ zqAOEk4=w7F.8 I4s%Kճ9?q+\!gk0,o}m3[۩ ].?ԮkW|e/|YW$>hmqia>~|OӜWvB%?|0/~fAF~Ss19dv<2UGӅTńəOmĕO~EޯL] ,LAfd Χ/,ôf/wvm6':4zu%F ]id)#_Qp`NCKy@B.1vÕCoo9\EZ>\O?[?Ou|lg7ҕLg/KCqsE aU Z>]o|ni -eymjbڮ_͆Vî'NXÚ(žWu=FK4҄QFP镗%1@zpc:‰tVk,hr0D{\i3BgGο_w6Q$g!!Hu.m?4ƤsmR I6 zE@/k`HM g+{glq} ۅGUGLޤmwGug.l`uUV7կ/Tc"!ms?ea P8xmd>5zeԪUӭZWtH@ۃ&\Vd(QD5ZsG#,Jrkt'~ݣ\/k dO ڲ:ƾ->1ͽϏrֹ`v᫩4mo0xqh_8b]BB P#LIcM@l@$`Us"67<"pg69[:G M‚*H$hSdלcAbڱ'jKZ%]-؍96iJM$DHjղhTke"R(-PTo$!eHzсF@bLPh8"F/IFЩ\a<,&vէQAW`itJfs>V$.tU`eh3plB[)h uRpjBW18)G{XQ4;ME~ #b1q#KWU1:iɾ "Xqq% HAm 덇Dޠdx2%dVFRdYqx4vb) ;a[P-u{hIu"E~!+C>S:FRP`v~͢ϏfADl Lc"5@DLߟyA";Gp/#Hȏíi% ,/N3?d?q3Փ'Wu*$gBId$dB YhS"7'+mB٢DaRk(c$rqGc23s *Vn4EKbi8;fb]$r;>k\Yahpkřo'׳=ӫ۵]d{;dη1{^˽5nMTG"(.;FAO -`1RPKD#%Ӷ"4I5]_4at |2iW10i-y BEI*H єj\p!2< ឺ;BŽ1,j'?'QϽXwQӆGFq+TFwpg@Q[%K)n:{3UִBW:iR9wG* P/I Dr4+ʅt`j ԅTx!BE;|+C3qfZ CD^ "&rPgL#x7N`>,[u&P>ݼo:P\a R~79Ypd>;'C,i r0J('lJ.4:3׶pG7gw3f''8PBpB$u HW"ފopކRs>eX>&.qg; _?Oowz/oьg"nbwZ?~8y؆x:8z:[SarW!H rٻ6r%W?c^d@3== <.&ّdغXrtԺm noůů Cw !J}搽EᴳsR\s-Cf[Ee'8_5~i•+=Zl%[ `m.21pIhbd(u.蘵pZ,O<UVO[ DL3""Rs`L}NR Zgy5ʏtaDs.gz9J:͍֑9kWu}6; o!H7SNgKL*%̘8(Iz` R"89!AkIT^{2Y~ 57,Y Q` (mBxȉD[/NK] }p?!z^j؎~s,ji0HA( 3ͻx(Q(heGŃ!jL9vBslG{`]@[^Πxw jgwM5w}f܁ޚ3(5_GeA*ϲ$A;i]V=eu8[ܪZJ.Ps6˅qb,REKRsR*£TB!o2Ft\/*Y@BԭQeAv qײǤ5q6D  rBȒVՙʲ1z>&?>(k7oo1%y+9fQk %"mXpGqbWOkn.IY) %ȹŘg${j1ś.͆e,ޛ0``__ko 55]_Q_vXU7w4o~ ?nJuvܸݤ44/# EoZh.y֬sL5j/~8А҅= 7J-qy9?F6b7|=Z0v]j~ݜi^~5kC;IGMF|A8RW>FI@@UFcTݧ=0+u]7n1mlc!әeɴ{΍h#ΪޓU:-/ZUM36(-h(I%*¨sF0!fS\о`t[#7s,,͉мevͫǫn/ģ+EVwF}VH K5I@fT-FW?\i5Tauk6/JNGmU͛ X3%W}]_udoqxnz_`=azRӡ9覑̋_Ke|C}SLwe9H;,d,Ѐ:[%E6V6ڝk־}b Hh?"ʀ\Y! QhD& %N"xBB] kN+T DƠ)K0l&O`n6Z9˦uln5|3MucpBNt>51 )@Qx؟6ȧA gJPA aT2X?mVzvBi6 ZCYt-zx):X.x.FO;w-Nmjt|Ƚ?]\]1?ӊrCכтSDzN*_+paڗx l- ?Du!'牅3vnmcZCP`%%$'Q8BrP9| 9iPѤ8^I|8ڿgؘi<.G%yO͸Qr 62w5eFZnXPPBv̕L&.XnAJ|}B!vo7[mXs! `Mv;]h)c(^i_:e]*nBTV<)pǠ&e.%9zj *5Fw .8&7c`]+N70>'_7`6YVFHJm _]qEt{4+sSI'sc! 
MTL bѦPmyiz iM' TZ΂!NB>](ʉ4\LLހ-z/hhete/#eW~MMMYiKXQ )IƤ R#4(RMcdI'R"* 91ml]k}C)l'6C=9䘳΄B}|8śSsÑ&p&.=Ͻ~*`{39T7TGfzT*Coxs}O{9'Q(6Nkkg~ҝfB˺ܢ;K_ߞ󒥴3Ed:!.yදߵ?y3%" >H }y.#q3-'40 ȋ5Cwҡ( 5옺ZazUZYq6yA)Ɔ.{l*6 E>ǁ>$Rɳ9\{. HR@wr Y^qIrGVWؙc犫9Q]=[Zi-̘=Rm&e;uU[a $.)m7CS _~A ՐP$WZ]qON?aT4Qk|g4 J+rFt-JtԩiWp}.jJ+婫*NM@5-g*N"q sQWUZN]]U)gY;urԕ3B~3jKеqww_j<8q~1F25̐p&c_4.s:iX`jR PTQQ[Fk66?xV*)r<aNM 0 J.{s1c ߩ-Ƕ{۴c;-NqtEݡ}z䘧M<\DŽqP !?w1 "J5:?@Pi+uZio,IXѧejo2^V#oX-Gi`xg:Dk9mN8xnv/Ԅ& 𾞄Ee [4vo14Hwy2JeriARC{ntɀ ECiXZ2،]7t`oiU:XR j9^[c'DV%Ht! E%DžъiK蠋%1tJgX& 3F5]>_U[b>*Ѭfd`b:t43b:V*Amb!tefJR5ϒڠf }f<S*/M1kmc7l 4Ms)I/A`Y$W*^k qY̏3ʓ˄<T3}C4k.51͢q絔kWI./ޥ֗I@Pg RZ\ \ 4K10T[ QzD8 "^~X{`x+CZ8+UJQq}I!!LK#d! @ba-&'1.!wZRՒ32'Mζjy/Jӵ vЈӬZfPDW^[ﭑH0P&dIG4,&f/p!~ 4~!Gp _ϭ^hl2>+7ͫU4uur:iNӃeˌFg2σOiSҒUOq?lAM/ex49o_7U k!/ŞG}FLi|b'%)!K[2Zݩ*1DN\,eB,+_1p;ȟ@{7nZ;;gw͔y5e3N{{g]tKmAJӅ_(]XYnTtg`+C ΃W12~Q!8XCP&>0 (xNdXE@|b l^(x;P"^O,fc&-G GMcIhRV&E>Dudmt/sivmwig$]~u)|?JI}9Jݜl6ZK3rŜZԵ]s#09ͩē8I[#͛P[_-U_rRH GP57NI Ee'+}ЖK-u#v9R^t1Υ, ApMG&3 (&Í k2TmdFvdTj/Xz,zOor0ӯ|#)dRA30xaF;RA"2 DG" YBYYCi [!=H8_R蒯싋2.{\ܨ !;8:>F壋::RLJN7<#.{\<.vFOae lKueţ |_svF kPFяk(b.w5Bd0u .q7TLۃ:}M~ |W_G{v9WXIYT^h)6I-]H8@jBy#e9HI^uP'3vB["%,mjh -1Ys,Vc w(2@JF0v~>e@.ozۗ(Jۻq,xT!ky,f WtLɝ8$j Ig;[IŅKdm*!=2p9QD' 4D2t^D* -~!%o,,Ma k̠VqTv^irnI"0 H+eucPĸ8eO70ºSz::#*>vteA'KD0d9YK cY/ ]4"l%GQȣf ȈX+.4>{ U}WCw,;f9a! GG)e&}dZB]¶gǁ۾)F?|遢͋;k4 q0qGuXNYigח2aХ'u}y` .[wβf?$ I]\\.%̲Wdso>_sԭ)i\5=3/ 0-69a͌w(373ƫ8zX,y-^S\ _h8IiW~ {mbύq<0..wF)Yf7ߚO8 + ewS-+'l]eo.'vuYwg\2qtɎN-\Q" :.p쇫kğAEyHli ir5 m'+9)E|OC" ;. |pW~LRs~Mxtz[oo0Nqhq'XJF+]-#ڗu6d:ej6=]st1Zl9ڷQͫomն{UZFti:2՜1 l#[ T _f5N>~ߟޟoO'd|ˇO?'5w :ZIL/hc㷼خtZ)?qIh%PzZyGS2GE*LjZe줴,1`5ECO*uC;:ʉqa'G)Νuj7Cϙu3I&޼MFA2, R#EL /)vZD۹a#>풃a,D~z EdZB&%7B,eT>)i1`1H\(]L1ؐ9Nw8P' ܺ^FSWMNKzU2J^;+ZՄ-W `R-f'N)=ΚS+LX2phy;@60^9nO:q벅 t >ЀZkiY+d')g٧8fрQycmgdN-ോ"g%+ja;Ѐ3<r23ZFZ])p Yz^cHH^!I9'BKt))QH9' dA'$OfCW@)d]rⓊ":aKE#11'!d/0\GT-W] o\@ b!t LD$w$gI(ERvuHf|V3;9жnO;zw˸3\47H游|UpI26`,*СCyh6*jT*;ѨxTImufe;Jb/ b;H[[3Sk.eͤeMV(A0{d?d"ٮYB!(wEjQKZ(!Y; nT'^|!StuA28tv6! _fL/;%`HT*i0?"Py =SzwرC陪zff!QD,Kƻ_d.JUvf-vukOܕ^E<&wM޵8r+׿"$pQ$ q܋ a"_b\9+̬$KY[iGѴ4 xVS%vwUb >}`.e\Wٍ3~˳K!=\!0ܔuD7c+Put\ *G S UZ#~vѷVk 6I%ªRcBcZ$4B5Zág%R+2Y\eA[!0 PDAk5wYz}H:[\`M}kjp?ik2 /;DWEwW~[6):hFiп!S44lv8BcvOH#hذm+w /^6M`v}[-noez}迈j][7Omϯ,^jo34Lf[?7 |-D݇}H{_˩û~):_ɣwZh^Y Eu6G+B#Y#eItVZcHsܖ-yJO "-uHgHçtWIYɧ_)wǥyr{Ի_\DvIW >HޔzLm-!^go`Cq2?i%ne ?tnbAEe}ێKzkB^l+1|!b:Zy)W~܍wRV o746lќ CKV 5SEQs8Y3 \1 tRhtpz2D 71IE 1ZdRRhU~\AA쀻!Φ;jN>q<<Д \aݠEK编vO+سR.ZDkR/@jNJ{W6o&=$My8qfQ?_(EB>paI;ѭ>M-4F߻(#:ШZy %#8֤7ZvX-sewS69"Y ")PjȒX}b(SU*9-#@xCI f>&e ŃЙ.C8͌:#v6=3 h<+(Sh~i~Hޜ]֯\ |"hO]%*sHYP;y 3g(}5Zn| ίs͡Yϳx}P4Q357}VvX,-WݏTBUrǪx7^`.Mt(MC.0},ʏ~ UdRod7KXҟe]wsds/wKOg1ݲ.p%hMN.14I1[ Ditt68]5djR.@ V BF s dW.4ϥ2Br- M >766'e6Sc}v[R8_Q]Un;L6܃vɬI Z0\T:ĬʂƘg'H"mU7~MkV\,NN0p,SkaZv 9tO@f:v eQ eF-+Z`FmQ`FmQ`FmqUT [y1E*hK)RQf+̹(xE*DjXzjگzjگzjˈ_;Z$-mZpz{kPy@: P9$ֈa& %<$u>٨X{ [, UNFgyFl:g9e\+i-c=)5MٜyM:AF@aJg/ ,כr$f=V ^S%fBTD DZaK [ V"#Ū's U? 
%(ܦE/t [K!$s)UH[ ,\&'f27v 5ItNrpX>n~{8bv4Tvy Zﺚ>FD|1g@ fO.A% U4d&kL88U[q 9 1zZY$aMFA0%`,2eRFK5jj9y<#@ِOWY݄|ToE܆PW4bQ 3.`4V&!/2z6*4s(Isc,SF$e3hUl2('/,6%}=ᓺO?|ߺZϺ>@W0W¨_~a\$_+Zg&1II7Zit 갪]ejEnO#z{M4?ŢtQ+}SDcK*1?*P -T]qɗ4}>>ӠcgPwNC;Ӱd1ǻӅ=Y<07?X4v8a4&gs.LfjVD˖Z";݆*2.5qh U;q##zy&H9W^{S;rc$S"%+BbtW֍/릣߀9^,E^l`CȑJd%xٔ.ސdA䍵V׉+u3IOsiۙnDm|m<1.<{_u}8zNiEImf]{ϔO@46HE>ͧyyCwԳ=v?^|@O)q4()ҒcEN"?2UO4:p'":` r&HTڐ]q `it0n1UCؐ :ݱm 2zubdL1RHֳlxT(wTdqmf?>6kS_{pV0)Mons3_J>|4*m&5;2gl/$~kmH /2r?_;A}@G$",z!S-ǀEi]SUw% 5/nZQQU[evv@K"l8{ǝ[^9廟c06K[_\ #uWVY4k{ DdnX.Ni=kdAkBl0lk` Jkhohg-%谼6zd+oM_p-xS.üm|/)N{].%`8nɺ+F&8㒉svpdȨ2`:][|R9}dAHHWH.MWn'p9E-h׿/cHj|^>жh_79͛L;N k?՜9mLMȲ&mDŽ q˖@֜]7?[bv<&ھO2R\s<q] `4t1k5{bq[7kGzuÈ8v6hn:e 4W^yrv209`Qͣ.&nus5Qc!#e಻}<))<(j5? }ڽl$NmTe-A<9?&v~xH~;a,ɔגM¯':˯y]i^(vi}şFi,WW틞DR|RUdV+.+v2y-UCۻ[ںʎ;G)Nmb9[[ yqbo+[%6 !9+9eY퓃̅iσ0*TՃJG%c&b( .4`Kì/.9Nw8{\U'v ^Fp1?c}RΤK \Շ+Lӄ+][ͼkW `H &X'H `{)z yqv`f<č&fGJ^Q)%u) ' <4#.y,2#::7N8 Zx ̂V"Y1 ^U#g;[W]&= o):jGip 2=) Lap]sR x!N9'Җh)H/BX pN25$JJH֥*ɨ>T SPLXĬ&{_J'ShIra;ApodG!TL$w%xKHQ Eʦl{#`}Y{z[^ΠB@BVVVwxLO Jʄ&( Z=+j$  1*OdP6(ƠŇThjF\E6} ĠEӴdZ#Ѐ!$V F 8a)Ȗ̈́bݓr;D*eMb"rr3Xs#H#5b5rkcGH/NuUYKvՋ^^F$@ 0g*XAV&P9Rpm R,+*b06zG?Pa2|aY.@^qNjLKqB~³1s.Փep1G+ <$xyZj%E2<;Ŵ 2 `$]Y n}6՛.w @Ж |@R}* 7xm1y,-Lн%Nꆹכݥo[3:3Kh,d4`]EW֩+PjE9ܖ_ WɒId?߻?zՃ{ b,Wږ+ȍ6L5oV?*h{읢4|U@gV?2YEǝ9$F~ZdUacgהMVr&rArdsȸYpbp.ifLɩMBX]6ה3{nYspg8~9IgֈvŜۺ9s{ɋNшX~ b$p.Y2.FJNFJ9Sq IF6T~*}v 󕌴.%̫z\i(p:Ҭr/ }]yZpP*@c fԮ옗#$lo"Z{}w柣rQ_h>|+K2=X܃kpRet/ {n[֍຿H념 ѭ|(W^m>M#T[Hۍe6^6A h2 Nl= 冂vCʹI \1 \ptRŒQ 6hy*HK#$9`u#Ѹ,uL fx* XxI/E)m)VetF&pQTvCU#gtdijmi5F"~Q,:ڊ0{Ϟ{4 'Q$P1m S&t`8èE,&1Nex/Uw޾|6{vMJso[ |7/dҢ}F).cA@dɠyci'iP(dM2F3aJmG)^d@R'dؙ F[R4`1ֆp5@&]ɒAր@Q0R\H|cbYPo \g yZaa2ƈَIC74WK iSB۞y Լ>zBSqluj\zG'C7O 8,>4HmM̾tK_eOKd}ʊ "(6 6iAd.J![YFfIٺI\%nK8oGWEñ*yxP)Df>M0LZG7kxN/ݘc7oo.ؒ ߺS=7T=a63@.@LW_NlxyT`:h¢ *$ {ڻC{w螺CIbX$WGXNV$i2i \:|х D_#zO+I>> o[^vk"76-A[9mzjD`uCMzsNm6,Óy5eC-u$<-7C)ͶJ}K} R֧['ӹfG4֟σ ֐Y7Kb< W1>/t Ո.L0ύQ4.[dF>ى'y[Oi8O↾5Y!/{v1x+^l&s2b0ك} E[,HrfϟbK-jY[vn쪏Uź!^RIEBts4Sɀ$SHL`W(HmjR鈝+)6yuh֯3P -}YvAe VE(zMgaQ+uFΖJ9qlJc|is|ufmp`UŹ]q;~k鑖҇ަSYaB $+-$D_Um*PdTp DdC8 !F ;mr ۵j, 0@V<%pD:6N9BΠ=3|YUx<ʶĸ7z5B1S!C"+bU& )PMGx&y>$豇=G֐oOD+tq;],2"+D$rU 2 < OU#*ߟ=8b<:F>F^E)( t0u|ցMRFŨPr)x)=`Rߙ:ֿ }#UVBqcD}RԵe7LJ۞Ȏ.@ 򦏇 loƥNo%)Ɓ{T9i:'L(( ^w񶀵M y %{I'zM5,R4N %7^);܏[7qyU6X~Q,Aս5JE ]_>ȓUPd4ejl62ۍ͂7Hƕmjqm\ݸjO)^cZ[ $?Qh0͈S*#SKv>in/-h[~/1sL'ϒ? 2^t6(~ɟs:#F2 |:1/3K1,~\p~~Nprq[Yg;y 0#>3;H˙yp{mہrOI͐e9dgg~^nHm4MaKzU7B^- ^تb;'7՜p>3M- 3cU|$,ͷrHww\5fObR۬LҲͪKmUfjJ7=y;]XՅقM5y:[VY ɼӭ⎱\;p~?k8:ؼܖ:AmM O2*]HH6ADXo"Qh18'2KAыэ3zvXQ7:8>gU†ՎY ,=I! %KW=@Ģ#u9и LɅ:ʁX3_]>Z3l?]osNG9>M8'C+:PYgPNγWQ/J#=5ay#P}=}%]6!Qj6g.8O;ѷ\\PrWSET2/]=̧VK>N zMPh .kP2jCa=7LIy6IP؄i d3)7]%01 zgl$xR{R2>oMZauyRPv^z|1^{36,1+RJPX\f93s(Hl d1(yY%:l^;Tim:m3*gI93DS|0;)KE8V Saƴ(x~ia݆Uy#y1`!d;QQ$^(MG}ϵ5mKj2,EOZ'ŊI&و`˕o>BOGAXvq^bI2܉C _\<9mF &B I(띠 `k7rٿDIqRa;4g#{xMpz3xV}CoFҟ u>}ж[q;r>YԂzty@S\L5t6ͬ?8uSQlXM}!|w? <ǺG3N(qh`Ik Ԑ=Mym͈w߁# 7#(/gztuV/{ߛThJZ~}i s^yߋ'b#:Fϭ1FՀo~2gj絶NϿm{Y?}Wo߼d5OsZLx2 mOG v=xj/ k@9i2:ntOV+Ep U?=k%CBz޺qa;GstXj,\^O3WhU߽MuԾȊ$PfO9J2+3fSF&£M +7Cr00x!ԣ$jGXXMj bF;JD k%E|< 砂:w\%v򔓜Wbs҃Jkcҗ WNZOҭ_8h&䨨7!Ke})gW `$}[]"82CSPA?$V_d ,rHqdt_g^OJ>) (:vOSɥWN+'NtM4T#kaqBڐϮab Fb D99TZҩ;p;(_վocEDm;r s?^ϗ)Y}%g3|T2:$%}N O@Lɮ xci E@q҈W3Au[y< HB`1D2ZҽPαUVG"v}>9KZ=xǢ5d^ڌk_|}זcUpZlƲ`W-/dYZ`Ո/6m5e/GWg~9:0(K~cFy]IJHhPɠԇmP.kAvًA#-)t(ZIt^A%6> t<+d\ISuE" Ey@Ta11Z9$XXW1@.)۱:9[!jc~-pQ5SJʰ,"sf"e(9UbQi# LLv#ႪiҲk7D>Sd SF֊9cA팜u_tř>9c>d@&r&yyrtxX6X;䣍'ʲk ~j8*oCdS Ʉ\eFk>X I8U\Im]ɹ$8bca$zKţB6"e*!" 
& "IZN[{] /C5bVBZ¦J&;+l z dB=g-;#qNqcb jw;6nQ , a/ #$,fVUw`UrVC`A X1y-d a+59ƚ\Ks(R-+TArZ({r3y~} bg+"vܑ6 d) i6")rFńUX-)ZAL3hŶla] ^9d-Cl*f&TIx[4yDq::iP1"vF9NU_Cuv%Q=`2KuCJ&$S yVJI r M<,'vaNp fU? 0EAp/*\D?֯0><]CH̻޳'\}.7o~YųEͬzˋgO۫K1~i _kՔHn?>ZvriT[*g4g-:"hKՍ5=rnA__" @i}T5 1& $u~a?{VdПff1E`0m] 3߶?iV$8߷%zXddb_yfPFVmXX~}޼+? ڛagq<̍{z@ǿ7T8O4OPsE<:Zh4>?WMU~5ÿ΃k_0a"zZ:V P k2z h@SADf}}gG&QJ#EBgB@xM)gIJeS"YCH\xQT'_ %18INaSX]5g?}l5iU9kJrp6OǷq(n\s>}(qͭ$aWű39u'CG_Ӑ_y2y~ONv!YW8SJH0ɝ%E Ў} 2!2#!o)̟HIaZKЯoK;^4@{nR ׹P)ϛ{޴*ƇܫAO7yЛRb~!_]y7{o}n04⨯m{^.&u';#]Xk~ܱ~_5A[k zmHK%DZB=(WOݏ:[hPdr W,(! oַPn(x17T=7IXA(20V )EXż$܁"ꠐGH:zsD21%$R%&҉*dMmMrJGٱwCM/Jh.e~`,VAC0 xiD}/.Od@@@BLFhdUZȶNDLz{#Eof-׋Jl\(v-4D '($&#ETm1fCحd+^$Jy ". ɻN!r49U>xUsƝKB8t90Zl\;Lƻ.hVPudtܠ @MZ 3U-g?]w aQu& :3F 6whs^;flPF[T5Q:KT eRu_T&l .t]x 啞/%:+^,{^‡Wݝ6C#w6mR,E9 vW]k;jb4ҳl^zw:=,۰m*J.uv+_lPZ㡦8^n.CAsQ^XlN.>yIebGF"4cn 5Yն l.]*aA !d\Xs~T _Q;)huWoƏS[9GlKVcqdF>i]8 ޭ xE:YFiUfFDo<)@tGA)MR;K [H։O]v֜Iʚ^ ]ڒhq|mqy0 #vziU!NM6M6ՒiTN`㤁XhxB坰zt@Y1Ь *zEN%Yt)ܘEBS„u) Ys;Af_M>eMOsC .K)}Qh!Z^T`i$n=}4;C46i^n\v&bKXQ IƤ R#4(RMN_U>aD'7iŝYr :B/n^G:vgcL9+-sk^)m~m%l!!Zg})SzU?8?7NEyqfB2a{RYK|woE`̢kVk?ksYN3Ȟ.Ȉ D`߅6KV+2K'Ib-m(RYlJvБq4Nj>gM+ADM, z֚܎uM{c@~aQ1A>eme<6s&#@4֯DBl,d YȬֿw!&tMdDUYg+,ߩ &r/k"Y %rh]!sTHk 4OVTNJHA%P #H @`HUABH2{TYPˈo1bg900k|do@n׌nhZf2ܞ밮_Փ7)"GZw}PL|G"}@+YpO+(K#Jn ~ +8@{@6un/S>' {69ic˜Sel`>͏BOh8vN4239bm.Zibvȉ#A7 AsLJ.{K(Ȣ\xs6HTr~THч@grjٻ6%WX #%::؛ sq?B?eBeqVϐԋ-iiNm}UlnާCK_3M+XBAxL2.'EւT'*XILqDt4 *[I+^~|@JbTZ%ra6VVj٬:rӮzoZMNverOXp:ieVq-}}WXI&JVWvqeГh>lj+ Cb9b];5 )s_7yeNpSwB[S*K}Mmmʒ 2=mʤH͙ZzLFe6͖Vf IƮVpOIȁ^d"]lW7:`z F׃dؐA՘md6#6 %Φ5CF"AL4s A2S̽JU"{> &CJR!%1hl2b0ݡbIǎV[V[> ص6.hxΚ1-mNLCjEFi*3ZHì1XŁ2!CĢDC&!ZQp$EJ,F}e{Xm8aKY0E&Z+qģƠEiG"ֈr1&Q\}T'TT  . +E_":I 0P.k G,Ј57mĤY6rFeXm8-GW.N&%E].ʃ]<ŵH 됴0g*XAV&P9Rp*YKcT.>]XK:vfmmj}h '=vimIp~8WF8>ߦ;sM LMm03рLdq1Nz|ʔ@6%QJ XdjE:NON()@+,OHNNK\,)A tRY C V୷AJ:Pΐ 8c66-t2}+˝e$ؐKW,/oF{oQ5a,x Y6x-zWt]nC5I@bK>Ljg .l"B:d*kBЉHU D/TޔVCn~G&&%۞`zY(Q2G0V&CTtpnH#Г"hbpV["=abK+6/ȲGPZʒ'ʂqr: /"a(t"yz9j^mm?J>2wsfmw߁(== jfw5,}9iot]Nc1Ka<K违DO{Y=+eߚ, opB/sz\ns۪؛fu5,>dLfe{p-ec3.8dGgH-O&x7G)y#fvm=#cX()*~xn۩Cǫ ӷI, z7v㼰D_E&_u_[b\5⬋vT?ۇ[r]c,$lhY2a_,g++>}?(xc,WOTVY͙QfI,d1y.B",t S*%z5OyĞ_ǨP㝕0nYw  ^f}9lc:,Y٫jdb|5^cׅOal}΃.i;K`!Kete=M-ofR(4Ej%Nj51x}@vžPގ_y;$=q,_ENpځ76-:M*yAJ)99YvgiDqҸYN2#Mg`*Iʠp(1WE{_yPuԎ=AdzIL+¨A]s ^crДx!N9'h).jx@ɗDW ]j?;|> UYt, cI0j d|.*ўs;վf\Ǣ"'F? g"$C/'x\@z)BT6e{б=ұ1vK`|е%hk\ :_(hÁ>~$?4;maA|<^d#M_IGmAAA9a =ߨSHI;wK  YԚ3 M̆q킱ɔ]"2]Ien1 ~ ma@e, }8 Bg,m9V%\Il:0~#d rwdI{6!EL~luNiJg|lDqI̢BLٴI2ƐT6X+{At\`Xht ܣoѩw9 Fa3zn ⨥'ʑ.|Xm85x)'=ޱ^x4U$pqկYnSteaV|)f,_Li1 4^q4N 2~V ^~e&_DWOH||yiDJs9tj7\eq~1O;ڟFӖ|(bm 4r^Hճ֑΂b,:(۫of>d[ōC}k2 5Vwo,afnSQmhdv5rwW3't۟~uk?,%hUڎz$x1T՟e+ qꝖVȱvo[W}4r2: "6dh@jxns#p] !ux"^{DPm>A.dYsCd ruNq -IDFWEN UgTɻJ!2=#D9I]_k g3_}b;n[lxysy}+y emtUBDu_0X3%@́^ 8kl:h¢ *$ 0n1~lNl"WC+#kW)69:z!tN?!:J}6)&_r*36''jˬx #) s@F֬ )tæ ,Ťwݟ~] [5,ŷ>98w#Weݝ=x_n(pڟ=^K.Ͳ:e/׏ػM$-T[Hۍe6^6A h~Cl}N + g C d`(Tp Ze IE/|vX݊@:3K<bV8^JJQ BrwF&pQTCU0hγyӊ᧦5KL3޵cٿb iqI^/n `ݟv >ÊHNA,IdK,eWݪRHǽZhsc,"h'Ayĝ@ 2V5ZHx:MW@)Yɣ!b,2K5ҶFl -1+Qu#=V$ZyS8Y\u( U]]RW$s/V(e$]Q[\zs9xSg?LaI_j:L 3OoFü.|Qd~6v˷%%B<}a~~TA(~cf%&dAp6mٚά]ƿKk~J ’C@!e M%ھ:Qɾ:_R=bS#r&l yδ|ئ u/.rUܽ ?th|1@"#Mi#׮:3YY0uJ>\d)iоd' gUL4R$Qp++AP)1IN .yʭ&"xB"Fq / قbp.bA¦,}rBZ#V8&rz~{GӉ|COvO-ÓW%2J,< Fh+!Ke-d9Bt J睩W5+Mt/޲dp%doV8!DۂprTh@/]}mL#$9+WCIX:s/J2^b1|',[g[ 񭮮cUNXX7xBE-` jТHmNtw-,[ՈJk!>*sEPc(HJEc!X+\5bkpֈl_Z6lx{hRO>̦?sa\Hs.}ş ߙl^Y~Li[QZIz*5^bGbкlf}6gotS!h f^`aJz,Ɣ{' <Ҋ3 >-4D '($&#?dPP ^2ImUgK$jAdEٙ"T:yx)Dƃ7NKQe-OyԥM]|,vIHҩi@C<&܌{z,`U\zHzU8-._LnoZŘ`H3iUЙ9pȌ6Bej?Ot(ؠkt<'%@ʤ0j L/]hUڜ}F s THч@-BDw; Ҵ,S[LmdД_DrK.~\׬o֘U5u4ښ%[<5}r? 
6]xfU³aGF=iIom+]Fhn*:v+ Kx݌n_fӻ7?^pClYalHw2Zz=>; œƣцͯneަ͏F=gϣM09qԍ~i΋?uß:^??Y!M{}ݛ;Tszo~$o6hdf.!zgLZnXMPBv̕w!zϾOGO|e'|{unѶ[g|6f6g'`*b,Ρ>Mv;|62Nfm7'4Jm {+I*_7gć?< =\;3ԉM3ܑ&nm^"vuC1+`۝tLLrB+'(c`vS7ul0OZPjaQ1ɇ>^I bDe(SFBRH^ceyJrj7T"RAZ2!/ui d d]58[nP߿(uny I>&r̷KNu NОWa1t.bs~]M'k-{v|K]]5I)ѼnomlBi$UXrI:`!)H%YTžvo౬h!rW:{`2k_x[S V=- w`b's•mŶюiA–34Fd: rx >Xʗ@~T4)1юW3IǛ󉷠mL:&A[#YIv> P?bY u(cӕ؂%j/XMX" 4w6چc( :gV;̱f*u">x' .AԧKH#8.VΡU%R^x em{GS/VҐ~:[~*GbfmQm1RHRP;:m#RwL+0NmPD\ +RF huE ;9-ݩw981ڬbJ Ҕt" \.!d]moɑ+p@/x $!Z"W9!EIGئXYfwLS/]]H*헝1V8f=GY}>Yd:=n>-7RtJZz4+"("d $hi9Aic` y>Ï >RҠhA`H4hH"h/F_z`4'p=BΨGyd`fePWN;E]Ii-i&ʄbp>鈆9n=g >+tt&͗Oc=Gސo)IR3 )dHdVJ&Kd!K!#Hr91za楞l,BF\ Kғ 6z9̂Q;^?,^B_ ]ӢEf?01'E]{v㸛U^ċ7NA9s,Wp *R+y ΁ N3uk\[Le-C2Y=9'QѺ(Kh eSh`@sǙsޗݗAલE&@=MV)_=ȖdZ-Y*{$OOV g14U;WsonU7/oW/Wͫy)xK}n:Kf-331OjNrvuU֗jڛ=l gҚ$&|6Ol~(237_?L1(ӉyHkRHVgϽ]/7xyqO`vy[{Y ot|4 ejXVGbYݹiv.+wqXM);=ԡ_;^ԖJb>Schw4I%ׯ3 (v8>{n:,e&=SrW/~ǡ?RKYġ s,ϚdڳWT⺲1؟MT]wi܇*^7(9#۸;Yj_/I~JaWk0-NzhO#=-Zh1t:UOf#-[RjHӍd'VW_ J~o_fVs=acAK2>*eq[V8R]c7\0AAgcxYym/glPK__uԢ!PW(YUU^`\M**/iu/`3s)[djəoʜoϗ$Яg}4z#M N?>Eq4>xweq Ij IZ@,7QMsfŨӰbǜw<ߝj>tCg& aOW}Y[P3j7{[G옌Ƥf(QIydUԵYɼmD|#&["} d~`9s"O T 'nFhtxJZP6ZH2a'N@p a/KxnJt-ІAۅǪe̷EE-^\|K^ V}3^ N+^͂=mZlu^gÚt<$U U7tJ|^9+԰:gPSmׁKꝱx:A~*mZ0ٻq"I).*Gct[2R;rP-\FtT .(FPΑ-<'Sb?3r,F>P~1 IГ_l#\]^m=Xܾw/vO;3kR:) 'I@neH:DrH*%GN;5* jh:gwINcS۫l,Ma k̠VqTv^irnI"0 H%sccPĸ8&f^ZXaaNH6'Ԏ4QY"rE?y'HxfaG_L_K5uJ xg 9ŵC5K@Γƒ\q H0cA:d|e,8',$d=dRR^0",X޾W!{B\oCJp3<ڞsQ\预'yOKL6ʻxpv;j9CUtm߼"!6yRx㝳ZAG2˫zԏ.'sן@S@kl5e`65K]ͪ0}g}9ApWnU3nQf,u3$5g8H>`a9NˁlˠOA?L-U'ewQJs[Qk|n%HtK8*˼4kϏ8h<%6ݿw}4Q;ʦq;9r 'a89Y,I]kğAEyHlyFB\M&7sY9ա.+t< %{Mf ,JkV~2a \D B ^kGkV7}/Fz Q Jo-!!R^Ae."lq -^@v@nnGw,%9O*d{7] UǤ—I*R)#WƓNܺl!;,4ZF ?kĻ-,g"!`>*!qsLsSFt$xF_g6VU]GY ɺgFyD%2dY(f\)U])p YRw9nMGRD*!M9ZKLIAX $#AK@tB. /mPfڠ]PbRQD'l92}"&愳9a$\d^wGT,W}?E~#10a<2.C$9COãKeG26z ao=ZG :+d(A80XJeWQQ0DA^Z/Á ^{yhk|="h1yAŃBz<=Sh Zo-YƢ1Lh![ƍ%[Nal2OBR2@H.rc#馑wG}94l?r3%a^^4),}szEETG%V0Zd3V'W]YkIZ}f:[X+>xst=' *8 >G1b02Pe`-[>񁥥29k߬|ОZU->EY>֔Ygi%޳QkH`e݆IU.^\-Na dClPm39eOoX*߰^8 eN ")^DL"{%1X:2JQkYE Fg=b˘ RHV1KɓL$Vhlcs3rt(>Uvk[~teՏQW9a0B"zX,JL3>'E6DY+G$bQ1E9e I@:;I堏#bKQT sZcA팜݂Z7V-Hnr3>9ƲIr~SlٲlO.ZM SlST^X稼;sR%# knz_|m^[mrm lN)Ȝ$O.eAm> ˙Fε @kd쌜=2n;U:8P,tc:b[wZ^˸H} ΩY,50>Ou/UN! qƫIha.E4 / &̍N[:=$VWMi,ЂF%+||;nR/u#vgF8 /wښwښQh>y!9ƌt99R<2 k#ɫYmDF̴2BzS<iC&ː+A%I4@ሥ 1e]:6t'9rW= 0;CIq@=fS@DIUݠdx l)ȗd+A'$wV>r DG" yBY,W ;#g7"~@zՑpq~ܧXgg\r(.qQq񈋏Z=d'[Ǩ|tsVYGGfI霣)*gZ{8_ lz[!X$Y$~ʮQOkBdi{f8m9}֭sϩ*)qzŧŽщceǮ|8|' (l}Huf^&{| eU/;~|Տ읶:j2gw6NNΆ ^|gwZJ 9^~\IJ8kZ8$ˋ2m?/t(Gi5iZOۋ˛9௛8o̻٫Ƽ1^ c9n-qmYog7?^] WuX4wɶ7o~wGNMY.{iG]vLApaTbc CwНzֱMC܎5'EdwgoP̈́d^Oyc<Ղ"Z]5 ~=2z:Vg6$>u6 ՃPZ!tEڽIn&Wv ?Ȯ&mn$`텷w<%Yl㿔xⶕ0[-xoE(i}v& ]~%^ڶ<*]0ogbmv~m۳Үޡmᬻ_^oge[A9aܿ. W!?}sōV^;|׻k/|~߻=t.W?T?;B-g'!gsnִ=[f> WtשsKnCֿ.)(F QE|^w(58n񁱹}94:yOgЉi٦??~zi%(rեbW4NS޳DOFXdF&ո MA~mw^xwKH #ebuB}n[^^w|\`T 2 e5Sq5Z&+UN& 5Im:"jVݵ]֫"IW>Zʐ! mb6ĘUZU jE6!ztCt;'L[jHKU3REFBCg ǖoA*Q2@DK\Ȍ֖JA*'\tҗK:Hfdo5Pc:xFe"N{顉xʥZSJȒubs{IF\ȥ dƪfHxFǤjQ7sL(CfkkpOFK$3yRbUM"IuҢj>M"4)lR!TP2Ɗm|k1X\5jY6N]jKd!>=Qӎ}*󫳐MκJҚ k,c`ک)l(ysl*I+J09PemRmQw^ƠL# u}Vһ10y  C\ (( E? 
JaI!Β<&XabXQ+5YKBS\L.l!bC AvyczD R ˕3ȸ@æzc(!$( <2*JTIFyRU5FeAx`1f#f0a ?%(0 ٔ|+ppRH 3&GA2PVʤ}B8fVJ (ޔ]ME8# E0GQF WT4jdIXG@ &5-*JeVJA´JEm&6X/dA› q+*p/} ,\UBN~T}% yrx*Bʂn2De%D w22}.c_;:l]LՐ*b)VX@(;KiC<vwxC*a7]/룫 U\k ) Zc2LW"=V h B;h1#-P|t )'#XjyU3r@`X%]H σBw:XF"mR:2Gm]aϭƝOɁHMF|d~ȃ!28"22<\J~,r0碬"LjtE""*PB`z;|֔jƚ;|Mmt"i ,\#i%A pf)4-+ڀJ)gۣ, x/z XHKOI@\4y͸vZ*AD{0Ĺdn 98 ֢1[IITRQ" 0.K3Sff%DHepd$)xL[XГ\J.2p`|zӋ7onYObe젔@J;.ϟ?i WCыIR i] ,q="=NGu+֯ nhq3X4~Y1^N (|OOC|eN9cIYu17 OPajfm˛b]'ԱdUk7-$%\U=Bv(P\4\VTCȠS|m}b)eě'y#y&s9x;vy'6;/1|W,OYŏ(`a'X{Qn2/FtOJ'j?wo%[ͳ4KQXIjgΉDQx96:{VWzm/f`sr]fan& \7vE&uB!\ʘSzJ.֙j 9*UfJ]޼k. mNw?O'A/""wЊ?L w>zi,^ŏ hY?ڞAF,[ RueQaOlbNKYURdlsZ=_80P7-/%aLq7.ŻD1?˹ ſNR<~ *o.WO#8a-j4ġuy*˾32td?jCV7U[2F0Zq~o嬸Lf"և7fFw<;e.4;dra=Ra-LmF3*idc=qI76FybT:.dw=(9C8UR׌%~V1'']8䣥~S#=SUxgKyvg5rhJbI\8k7R ìS,P GOz 'LHG`o++ ŮPchx]J&ٕj b*`UGdOL.Ӥ%&5S1N=p2Wckd8O'rzX{kp,% <7 yߏ./ῖѪ,AeN;{{H˪-Բ{.Coݜ\q٩8ͥ0f>W[܈iޡk5&OAU]Ѯ-Mx4iWF3v%Ue$g$ӻxjN Ө9;N qʱ;QSت=1^FǓ $JdАq}=zclud1\O7q6McoL O ,oԚgӨRxnǦw8cN:6N>vjַv*[ u®\ rdW{7uCv 1Sվ2z Znכ3)ȦAb*F\bӨVخ4l6- , B6B΄bWV3uB)?v%c:$sa+D(vj]Jѵ{dWObW3B+|]FcW쪇vQ< N` zC+Td Uvm,'+í`* B: `ZnWӭ>ڕ\]`m(vE+T4U i]HvRcW(WP v쪏ve!rl,vgTK+^ia_!n,{<7P[LrYDj WT1*d1g+9G^N2v*]svɮnzf%@JVtlv5)I`t\B1iT3MUjI&CpTp z]Z%D UJCvC e+]0vr%&F)0T]Ѯ҄t+Ntr ŮP|2* UJ+mHvgnW(Ws ZuB0ѮdW(8 B*{Wl?ve5o.]` kX(vjmoJZKrZ Ro'sv?DΑ]Ѯ,Wu]CHT1ة`6jMe6kk7=ov^'C Pp@ Pyֳ·y[ 0^ݣ`sikW-㼙JûgU},^Me T` ݤіiAP- wҐn gqi;pdRZ%U]Tmw@.4.-RdW X ]\]ZenWRڕ4N+u0vrŮP]+TٱedWOcWB+=ƮLP rnWiڕ% Xp ` ժԞ쪇ve3dWzv7b{e dTJȮzhW LC {+$Sܰ]Юp{ ޢ]TKw|י*ӝiMvoGF6jQNP'9g5&p*?qu\}ʻVt鲷\&|u?`]$p}X}r5ʫR+Y4v̸XS)p&,[uugw]mIOD2F=]AVGa\>i^uW~Xy}?x1"vHU/opaXH~5^mUݬ*C?B{F;_gWf__l__WyQ߽q]R6+p&[fS)KD4q%O3,IbZW .i3@y4Հ:fȭ*\٢d9ϑSq[mda4ܦ\"Ϥ(œL|)Q#v$.]?w.h^wgoتGylO&p?N(NV{=_]?㹛]?x,u-#|cg`͡dv O5n͹&72/2_ƮVagEe ~Q Glmd||t]5hk\o<ɽ)Dt:e^ L2.Ef/BHkUa TuSN cO4vcembN7w!12ϟNֹٷb[=-$z1Y@ qD ̊|-,1Z6N=X`-.}8c)"KH-rƅ0Ç~S(}M9cAYQydj'QtQf\XDB NQmyIJ>ABv(KֱAgcH阆ťd*V䱃$..X˒[KY̆,l0r5?k=]b}WlSv,-O49RLBbgKe:ϬrE9b9AOl;D!#JR'IRԖRqS(dxʽs.+EZ2Y:F-pH%tIm44e"|!ds:q)/xs›Bm84{הGYO:E; ڠR=82oCK?nfֻܻ/FʣPxIapǃ=B$>!V?NǟǓ@bYX/쇍 l|l,&nl_ƪcy =ZA[v p=u=Һj'A]˹ǡi^U` =aQX3zJx2J{Hu%H%m}xt'ۼ#|.qVXP,FVTkVqK\嬣I8nz]w?DhiA48bKBhe\xGHJfT.oj!EZ7:83' {[x䮴 J͇)_cuRl2#;&(WJo%*}0#W"q)cV3lTtN+}pW|8F#9MQ$O bZ.4"$+D#HPb([rX@ ŴTqxDI &ks*SI-j%CMƈ91r欯ɷoG~ޠYOWe~t&_t6GkC+"9Z@,zhH\;B+\_<սc6+v/nW~Ld|rUwVYjzo땸v]ݬxkRB:^T.;<>RL.qyEU@@* vHe#(.E_s^duQ{ t-"sf.ڮZX*bKct1.-m$Cd! ifupfC>̙ou(1[W>;>.ON!5pr͓C0W0()w~ͲcҎl8U2~H9!%'i %W<8k3DRgwQODwΒz7ZY[eDx<dT2F:47*%\y/Do]bZbo"%6#`}_b _=CvlV8XcВJa rc "#Ub#REpԸV/EV ٗrL%lVFBy%ys`@'_JNUo:rZ阷cHAic ӄHtH:«.}I1q Ǿ["߹\g7iADJlA8]Vr+WȌUE0S,QY9*o+)MWTB3,A 1Ó6$R 5R*9B ^N65JM-չPvX-/Vw>Udw]`{yZDQi8Mu>t"EDz:1T %VB@4D"u"hɡI`^0^') FM$rj[AzpnbLPq8+HT&)KJG'QG5 $q7 /4xf K@wgnڳ'+:G$NHZVi 'Hw ^[ėF$L MYmpQM.%(-h&qIwKGvY:d20Ahj,hHQ$ ̵uoyJV 0v'"_yb8nl9^֓$oeNBA0Aҁ:)q)Z$rF#ko쵾w23BŕVmhѢ!cB= fvCw\UӅ,JM]j>⍛qPV -5WSt 4 WB HLKJf 3~N*G?k$ԃ)X.ķ,WD`@ZyBɩONGɺYLOid:o[Kٛ1@೮Mvr3Ko͟{IZ1=_IUL\G)/E3e/擺.?hnvn'f0oo cx9hkмmB3T.1Xw. Oj~NOR[(9x9ɕb YN4d>R\ᗳ"~qN5ҵ,hvE2]aǝ j#ghޏ6!>:>YlWc6f23h> ~ 9(p+bڱK_3k%~\{Y[ 7{V_gڞ-HiP?q|h*'_EYTmH|bukY?$efRm{?NȹCqP;奋|O9F– as= 9؎k 3=`ڬ#ZÐ' y/#,b~$3AAٜNu<:dMv֫Qm~z |s=iG9?#gf4͏%4Z}r[>^Ce|j} *u-՛ ͿEjT>%##*&; QjXm>;6Ϗr#]MltFɏW|ZP^g ۄӔ1ryE^h|aųN5,DE~Ewt /{ h @B@s 8K# .Z'%kz"~̞[BށvAIG'&0nOThF!r}]@Btʘ`zWmթE͍3 rZ3ʙąU.‹;D.A_J5x.s:mtxry(\D 4TOi%+'£UJȎB\硜%˻8c5T"I ":)t4+OE/L"f !h\l'yuU)#"#KA6 j1qCU?o[~-\i̥ MI AfB%S9컂VD˝MpK)!2@8hsJ5Gp:by3+Q0O:v*M mC؜ՌՂHJ;:YNSW$tK@4*YMAZgQꖔgUWzuyAv! 
g u4E%8C=;F:$Xt?MYO]كg"_ g9XJz9XAQfH5C2z{?%.x.x,:TQJ 6|RqJe8dyp* )D"2I_ζm_7S)xj<*ud/Sܾă-y,y2MJD$1/آ?n< d3v9/̄~> ʫهə4?]t֕F0!ZP)ē`h0`ouA2\1+-衶TY#H\(Ui_ҥ**VKn z^;Ul{}-9Y];F\N& \~m:h򾰒5>EY޵,Y,?4kz0CyiT;PCšQK5Bd yd2Ž0ep4,c 'b&2F IXO.F:\VURbC[%2(K6-С]l.mO_u[ B,o4.:M,D(e:H\I xJӮ3N0+$#$+(NbiE2YrGmM9gPftIwz+!Zk'n1]^קYl]V[M-hTo՘.w_~KV)( 6pW^)a[%B'סJ#] KdJYnI!P %'LH}v4ײTRm59DܶfZƮ,-P,|#[m̮2^N?fz=UAyv6t6ξqbʄ&HiuL3x}g 4XgC^P?34I&U%iʤ(K:{eLV)#n F`v̢ݤ\TYƶLOj2+.fWZؑۥ6ڽ{g7<%͘6l(WL܁Պ` hQ-!1PZHTl )}e+2 h/AքȼQXpDF/*$̣SMpkra/ 0v]=+G?MRkD4JAP;UWh3HlB[ɷ3Y(AIFW1D kndޣ'-t(xLOO"Vu\.*\gkdW.ꖹ({.\dyh`kwU\eJPY4 RxzYP ?< ysvV?YQC1^Bl\qwn>CO M&M&U$O.&W%FQ,./Z(;VW0nqVL>6Jf/u|[Utxc|_&jR )nĜltyݼەb<x5 غ'iI uǬ6w׎'3ii6]/^晣)5 vS\ue5`ScG4mr֔w s>D6- T;n YcVG9 qto^^_ޜ|W}uK4#o`=õ~\/Y~VkW+ǫ?^isb9^u<":2ayn1E ?ŭ<ٍ*:p6!šFe [?r, mP߿ `;ݡǙ\"gۈw ATYYJ:FV|<6*-=Mb; BW.S-,7{ui~a#CZy>(*'q+_q|N.(ibJi-4e1K8,t*u_kmbx6vMS񐅜oyŹYoGڹiLpEW$׸\pEj. U =WR*\FBNlpEr5WT@JIW(ذ|AU." 3x "\&-\Z뺎+TVf{\8Xe+le6"+T BtWR:D\AFBqI:uBj?H\YȌpEm>".+ T޺:H\n ff\]Ŵj2cjBj=\q\sxg#?]dD}tdDL W( Hƪ$+Rpp%e+lPJLqEj:HFW+eW( W(lpEJu\J׵]P=W`ȹW(W$We Zy\J[W+rPc8$\pEj{":D\+4NU>S$x."ݷHWW zAZiWY:ʣ\XY\jO}`R*ynp>- v}z;2jRTJ>\ * W2>}-?~uYV *(J0p[+JN(i[ ٘$W\LER"47Tt8 WZ&Y7 >;"#z\}3V NNnM+ۂc *@RߚEW A}kZ{Neׂ=wzfH5˭Ԛ}nRW+,k kwvj5gʮ-z\IkWOa68(O'cj[lM>qfaD )ucmrs8I~T跫qd{1^[7n==wGM8NnG5_ Q/Y|&WgaYqgϊ/\v\L~V|QG_Ѽ3(| )_9t~EͩTXZnEGlQZZj*T6jͷͳח3j􄒳7~ -Vu3oS gzwRX3bݰxݰc ٯ,+v]#ى|O@oOQxF@a;bZ` #7X`r f+'ZwWr%EifQUXW$5 #z\ \pEW(k.""/4",fs@<#Wv>.U*q42t;u0H_Wg]>Xm'U9n\r1H%Ժ73OmXjȄ4W 6nϸNVjS:Hm+j窷׹Th0[UC$SN [trsa،0޷o'W\0Mj:I=Bt ?EֶpErWVTZq%rZe+)q HtWkz\= tRBF"Nd+Za+R)m(W(XW$W\pEjmq*W+M6g+Y."NwWR޺:D\M;o` :}̪TG:#\`T;ɵ٬ Z`":D\nVplz[˂&p7MS0g16,J+93ʁ268le>"Ʌl&HgK*m?혊aÐ]5m;4M[UfRi%j+Zwl'l+qs[%W :8+8?PǁUv'7d>(J I.\0Mjuv*1}ڀ; P"\Zy\J`=WpyF"Fe+ku.BXqE*A:@\)jl%XsW$W\pEj:H[UsW`a}uP;RhyV|?{WƑ] JRb+lCzI%AL} kiDU" ݷ=73ᇻn7o޼7o:='wD^j=P3u f}z|vyYmD>!D[2Kq&x]ˎ@S>EۛOy|Gk^/SфOL}<t;|[̓#mY{~.~O;D+# 'G YEnO3p@}i揙'@1лytx:S@6b^5`z6OJPfkkmN7S/=[b UgKx)߼wunHE|C67//{{ |z'>Ⱥ*-QE)h=_Y.E7wQ[ ɔ\HnUK%htZ^g\ȥ4S5i|S_ʌVDž4QC5Jstj! X&Ut(Qom΄1h)EL$?'{j%{(*ݵA3Z Ct%'+>'ѢkMҗ2j*pb)Uf뀬-|˸()wFJ-pϭ? мd0cF4CbW}BR@A-V4jYЂ=̔|y6dڦbҥ逶{І,_ wdK*0H|v 4fèiLu\U~b1]ݓGW@h%葤._зgl!Mu-NQ^dc)C4*ܽOIgUA2P\s9 Sf0j$7.(]cp_^wF8'֤@{ur38:Akc0mƀڄҙܜ`/-}ITp,ֺ[7.m|i"%h,ΚFR-+wr26z=򰡥Qaǜ-rN rSNcG3PQCm>+hW,}ZqDg()f %JJo*]/ Ţˋucm̭lLDIYـ0#Ȍ,Hw0`T$1 V4ozdF)mC *TD@5w˽AAQU6MS` Ü` UufWHiRTL`6(:\d#$ ŨQAQi+JZ26HQc9MF2mv^Eo{jPB]:=n +Z#q2FM[A( `-ePBdBE@i'rը |u@x`1g#~ nQ1uKQ `*u &NpPg"52D5 / `ɦ= p\AQ{Se*8(@Hq`Q ڳ>EE#=dH_(4: qJi0+_CPqj{VQRA}k󙍒 Y̼cI5N_*}[ MJDA e͜`3d5>! Db$ۃB6rZQ8b!z_4yZȠΌ0~Cw@{łTM%^1,4WU)*K;L'Q_r9`xÀ;Oն_yYs/Ny<~`X*f=f~pv]&!jhX 3 b{PT8xiT)̛БW%sJٷUe ìcJPYf1h\ k1i3jFq5R 2j31frrp gid <%2`O~@rXr* P.(7fhosqPD;D;Lw[zP낂0dj*g':RC}C7ob1 ?n?y xMRP'w d!G!ϺCE7 F-)>Ca E5FHKCH45t \:'X:W!Xڨbj#cR=ڂ ڊ[M+5>Pc͚ Rת%_6ϙ3T;*\Aj},ޣ~]ےYo5kPqab2|ěk΁ >bѫ8vAi "Y'XkʛrLpy/FF̆ M F:,`=YJF$=kikJ’ #Itk oh J`_<R- -1˱XTZI`SAH 8iBeN V3f !KkBE9:WϮXD;j0BI>zg&_'([xwcsl6VްMM]qTwv w_yj Ԛ|kByGIR;?yqt ||o[mj9Yh|vfjߤ֧oO7/c3NVnW'^i^*?_x[zVWI۳_8Yƞ}jusw m;\voW)ڸkv-!?_ȝK-bƪWZŵOԅ($> H|@$> H|@$> H|@$> H|@$> H|@$> =WPKqTW/e>{x3;w-9E:*H|@$> H|@$> H|@$> H|@$> H|@$> H|@i[iZbh> FxEA|@$> H|@$> H|@$> H|@$> H|@$> H|@$>wӒAr|@*)}@@H|@DH|@$> H|@$> H|@$> H|@$> H|@$> H|@L|@o;`_x}[Mջv_@kj~U7ۯ8Zm .ȶxblK@ۖ2x-=/;,Rs1tpZ ]1ZOBWϐ ѥ'CW Wǥ'o g.]=C^ӯOg+v ~)thmxt(\YU Ua_]V6?bb<>o/QUWs*vzڞ֫Wo{ak߮j_՘k7]^={?xԅ a6jZ"W7|+"尦{?he_߾! 
֖5)f7p5#k/\F#whfǺaFFZfÚoNF݅SM}\覹dõYDc/1ʠDadlW2UeMW_7ߗmOL՗4OЕzGmٵdW9os:}`ٓ՟lxmvٷy?`fmh܉'ʮYrЛg5^MA;B~tu(6D^ M3\OKiF4LAhҴ %3+bɫJ'Iҕ QǸ `c.-OeBWϐ(Eӧ+b pK+FkS+FyG` ]=kQYZ]V9ptUth)>ub ]=C1(DW 7^+J-yKN ]= 1Ebӂ֮r޵q$B,rF6@ ,]35rH!3E4[4kU]]UՕ15+% n/us!)sͲd<OދRq>m|~۲-b;kHE>Ӓ)E :7sW4`i C=#G|2-c1[A>{po{ĺ-ؗRŏ,`3,9bʠþT6/܏z\DJ @μQvRh8IRVޅ7v  Ci\z߆I.|wi[->-P m;Óox&w52oA`Ѓa#n[W۔Ym\0o;3A`ÕY9>1(@T^ɯ8V傡6|Zl hi^,55ib"3`2l'ÜQf YcI"mC zb"HYA~ 6W% F"@f0ȫBK (FO D+jPiA3թ/r 8M0<[Rz;8UwX+"D4}ٝۗ QxVgۏswQ@} x08}.f DB7 y!äj__Kη!L{)@LriZxsτS52`E.@4#p:ܣfDG=FĆk琠#QR`b KJN1b;Ĉg'OH>Fڢ7nVZ^~7(C ]c)<=g}&bHFvh ASHB%hg2DOn"A(Ŝ(a6:#g/PpV$]sfz0'h`8qю`as' K.W-uF nJ&@{1a^ȄZ4rvfwW=Hk]&QGa9I ӌey1oYTH%3 S8sh*25sx;ƘynRԍ4R8 L:-rTvR~qaZSWzLwÁKn e(k4: )c! H*ƕ k#kZH%{/؉a:uxܮn. . 6}1^:rKj.r{Ju)]{.=f$D+DR>BhggeL)HS>"<\OtM<YBa*ϳ_t~΂zâ).`-L{a` ƣ`IԸIZ^zQ\ؿUv_^n:ڇ']OuhQтJS -h)Yf8hh.gE1|-AC9Q2-Âk(`BAA NQ`Hu &\SΥ`&'ý\r9Kn=' Hw'wiI @nmnsoۻ.POjvEs#*5q ZDeҞ I* 5 K 3H#.I:2U^O7g4sR=TE|i >#D3x@7> &=}u<]&]>yqб;RXWE? " PԵe7 '#ԅO\5z<Ȋ_n?\A:&L3'ͻDkm2ɄVQ{4iO)N7[”SZ̆U(WaIq[aRpQ^e2,Ղ‘& ¯~Mσx>_Vi0߶m8kE<|ꎔ?o .Ioe%v*o7׽.?/"fj`y)6Iai  7)QxdC2fs~IutJ9wWu`ԛD'( TT/}gR?͊L˰b:\ KtU]UJ\חnFxGtM RCCbnYǖw7k܇lcVWةz/)emQmkLw?F\!_[7/cWn&j@:W$LfH*3[wh;\ms7`<p;Q$؀H+j$ctЅ_]smM{]S TДAxSRK1U#x\H+L|*\cA:p5X E`  uQ)X\EmjIO8v|URۺPpN||[8HΩWåm-y42n)=]?1ؔ;C٢\ߚz}i"*~NW^M\ѴūM¿g&^HwZ͓~-M6SbdI̽4pp{7[sxQٴ,o; w m#ݷ CڇѢu& fZ>*7:^f|XOtsp?ny9ڧ/6jۻJ.rQHٻ6r$W[Eb0`34VHN[kl˫PtGEUwCiskm;Y O[Տ+Ar٨U4BNǬY?~%O^h݋_|ï/x_w#$w涬w &|rcy_x>|wX_4k\h*T~W͑eX-_Ym\j/fyĕKhH6bGqsn[@ݸ1Ǚ_=%BjK  1 1bU2%NELmͥFm@BR%﹅7qʙvk,Q+"dg[<+%ZPYyur9ƇdKD !LC]c-:n}jYjGW>)P]vAн VYrqmrYax6ƮA%1)_3lj qL;KQk shY(k&5Y#V2d1b.Ϟ>^"TNHmDO)qmN*Dڷ-J(9&Q"2ʍjWv+Q MST:*~W(W[$~VN0M>C>6%)|m}u6 xd4@ӁtU;LFr kC&U kz*̰%-ߨєV'<7c[ZȖI%v:*R}f C\MCѨb)ɛ6ɤRrkB˷3^ViwݡKeoO^R/^+5! VT"Z.=j⭁Zn^om9 WH b56R #B&5Š3QzރYM$أYwmjA| f`Ezi>+atz->ETٿ+7ߞ3pf< Zj85D7nP©y@?Y |GTFрe%$@9,ӐкjdS`kF?-iDz%IKUQ;Akk0\AU9'e!P"8 RM 5zU'M&ZcY_PkIL*9*l@#Y0I09w$ҷ]R=:{udVC VXVG p iqU影P1cj.'!Wd2'ce&q " M+%y(wvnwr:\x78ؕɦu͎?<?ްTnה>y2jw\+)64j,N 0Ngے,B-M֘L/!z"z0&I|_i&GKT+ zjL KAkqL@YT͜;dw*bai,αNpڡg'o2AeypF\Kȫբo!?_~r,$LYaV!*)k9!ɋ*d&,d rًE+Z6Y~u()dC/ʞ:Gn欏\ 6Bߨj5:=0;!.D\.UՅ<:3 &([CFcN`! 
2+,:%֔VDWM p 0DkYGR+DncӈYk'N?E5;"kr5di.f!uVǔsm(As`pY wފNbsř$xXJ9 ֛9"v3g}D|ʣN 4٭l]h8[ !3A:CJ6$J2&DXdNK c%pw'7[a% ߖZ|X7y'햝_c2s}ܔ0Y4U.v3_,Z k>fJ`| {)~O~_߲܊}o7#׿,yzOk$gYDb0bYYYbg#dY>}{O{߬QY>h0*08D9vNxxˁYeyn7r\XF16u=h̖?]e ?pNΖqXsv~Xߧ?4[&]pM"ʶaɷ3鍳GogϏ+{J!mMǗ2"œ3Q۱ (A(h>1i1!ƒV!D.Y[;%lY;ٷ.}z R + lRw9PbJB] pI8 PKJS0^TfC֐֥Ow`2Z#q{K 88!^Հe[.V#h۽`+0Z 0mHc6[wPrȐ!9mu ޲q+K»[9\fۯ+7 ֒h!fyM9$&ΏvK[}"*r`8Mcw#8 QYH7=Я?-KpU9 70uӦN-~@zP5;jh㽭%LyW (&VW3V!)k MZKn.K|;7Mُrx2>Vi'1eL  H2TUE`?M5>vX١bf'j|qEAV!vwXT:FpTNR[]uVM(FL}legj!A :%/; ƈY%B"[ {CC[oeWS{Hn b>tbwX?,fC~ݼ &:2Năё8RE*u™SQ(egcaNh3MiyhX&쒯SA껭?pԡ2'TH0 TcH`!-h[jq{oNez˻.f)p=WD/{9ʯ۸{noK' T `Q)UOm=ع`Uu6AELCMơ'Sx:XmIC}".8$L9U&Jc_8X6^ k^wos#o5m(9noߊa9 grxay<\i)~n ʙyZv{?ivޡ gK_]D^rUk^\IN+W}Y/OG&X!D" T)Yd?k?c{ۏ9@ĩEjflxXk)רh+l]lQ/$[g)ֺrl+!ko=zp^޿徤Zvpà#15==*F\ Xk*F }vbT12G>p%jE%\5k zjVZ}G/c|Ӫ8 n)GڼЫg ..̣w_n-ǐS-vP`)@X覰JﰞDml~tpPO[$vlu_?헿~ěf._O/O'A].[oKXQ=f9} k@q0Ugn[3Nڿ4ΕΤqJJ熶ll.`ʩ%[A'T$J$TSB^)R&5qN!dʱw:$;O$lZjtZF}J+3A\kvZ\Uyo,TLdĘw=* .+2yމ2j8 $[9˘Mr~5yԋMMgIrRs'.JR~_pg4&%B7!cz>Qaۦe_8$]b-l[ D%HYan[4vVr[U(bb>'o2r7%,>,*A6WD$eYo.M l2!C@bA$M&&OEl}|̲6&i-y{XY-Vu$nJ>aq"nWi"Fr4"/>HRt8*2x ̈́BTElLJa(:ŭH @:${3f #"ʜ=[ ᥜB..'6%ÆRg`,&W/%ug>_glV48SFixjҤG~(Rԛ<cvݽ _D yd<CD[vrԂ'n4Jd4&v}l5n7grH"HmS9#u`+ΗTrΊYH2q^/|jP_۟f[ jTD{QSy.?5u"C[w/Jl'A-Zb _/&Vų]x7ߍkopOئ+)J(_zeX26Mt XaGk-7lclUg7骛UK NLGʅw0FdsS8!{:myd$'"2eՓ|V/o/H@Ogo}O F8' _쾀'Xj]όqh(vbQUtҘ,VGh$דQ/LWG2y !*c.a=Gϒ szMۢ dth%ld;D2˄əQf,@4):>Iٔ Ji$W>;0}U=?_Hz 2ɲ Bb{&EȄq}!وB'8Nhf'vRpmk7=󕖷Wba}7rZtG7]v.Kutzd,x5t L:e+7ZU"xGɮWQ-amȺWe쵖wuqh@126YluA.K۬eHRJ9,16 ,X`ZhU.*֣嘒1LBA%y4ҥȡarv(1W7WҬ|ABLo9'Z3%e`2%4 ZwEuRs9nO!H)-:΅9GD]T&{ 8'q/ I'ciW~ql B}2PjiBIy פdFg)\V6G;*֫fwcAre<oD;=Ow pdxdyҳdiib6Hdži僾6;֠yp}0 R8Pje 䇪IE7GGoFe V!$J:jpUZۮz^!Jl'LB(E Gds!>C=Q54,E%rZtj9<2?JP" zcW<=y'OC ǦW?y7\njj^֠E'Q^+qӽ}/f#, _N[g֒]\Om^ ii@%V^Oo_}`+eve2&'*/QxMJVl$ CPx0 + %ZoQ |;W27 C28ȚP!F)4B)A(x")Cp7jl9χ&ovoߪrC2;un jd17Ώ6AW$ >vh/]E('n bGAxkj`)(^is5&`e:mDyw:M=]/CB+(vެy%~chr**.~5M Y mʐ'9t2݌^}m ̬+rXyyLWu3MR+m$GEnG ݽ 0B-T`S)[>tN[n[VRT$3"fIut5yu:akk!3I%DV%Ht)BCQ qar*1uCzSAi}kT۬#kJ1uva[ pPo?PJa>U}Ar ] 9K "JjJ{D2~%ߣTxVs|tƠP'D>hJ)52Ep48\BeT24*Sr|ԂC wr~ ԋf4z2~CܕԻ[d QKK-X"bQhI˞gk[DZU<à{R ^iVAJ.DEX$d VFb1C3 D&)gY̛m#<#$vq3>ݭ uUA-6wp tNF%+ . hVf n x:?Kdza<7Cs*ޢ7Iin*Bv**OӴ閎씆(g>XRjRJ[Aڛ>5|,Ml>L6Սnr۬IKn`|[`lM`JEHbKus߈u)5ѿK V:&džbGly@amf[<^vیV] D뱙 ^ƭnuUͽSlKW<ޝ.`2=N}׶#G, Pjz_Wmڤ&oc]f 5YEuSۆ/Z6]Yw Y8U.}QvK)o6/ڻ\;t7Aaeho:ڂ`îs2hd%.ƒjlzz)[KGr%N^XNAqsu, -)1.\$4}0_/Ps U%?k-|H{PCPyξ&'7c͐w,tWRթFm^ f'-4ZLUϪ~ +) s>V(Cԁzd5iHngۤ?>sCtst( K5BR4Y?@!tZ#yU`Uq8GU)a޳y\{wX}vNV)b&%2WxsGr&.ӕrJw{a*fOltvGCϋ&oz9O6|h4ph&3j k[_ cֈOpLJј*.c1UZn{3ʹZ J<sU{J{T͕Q":sUXU֊C7WUʕMo^J5.6v0YUK|R1~̴ͺL7”75}nVYЛKsIcPyO/au{ols]]dC믐k3wsh9j(u<\ >D1;RH8 +ae3TeRVmOeUg{ 'L"@b Z51\+Cx*<iF ;a' ͮXg@W_]zwC.S+)>3 ԛ?" &-;ťG|z>O-f,~yt*:S2y)xSOX8z#X8g}*Z 2hOP EILF/rPP ^2IgK jAdEٙ"T:yx)Dƃ7NZ7BqfgBCB,B48YZniC4u٦byO U'&̧=T}z\%S>  ZE8|*VQχxp޽n.~ ,L58`sndF!^u{ ؠkt<'%@ʤ0j L/]6gE"ǀ\UHч@-BDw:ثLJ\&9یmy?xm;l r}_ojͷYvu x2olK:[_rh *m/gI ?&veښ-ZwH ѼA6ulW47zlɲ畖a:ooyu;Ee͒m:Oh{_m[:nˣ\Vdޗn~[Tsy˦;gq{]]9/)~ɤ)>AL5=ȕ]ˇrx 1@Ci}:/]$^Z3@a#5 = 4V ٻ?[;? gULR) 8˕5%0I ,,tE7B"r w //$57pL*EM"gPؔ:O;g]k$nY z[bGƔaZ[ۥAko[٧Fc ?7ԮTݶ8h?Hs%Vdtv-pdC=mh$_@|٧Q`v[SS,]9vb=~(:9 P6Zm׳r7=p\׀4rG9+f3Oa|z=\4 rc60kN]iv'Wp{2ַ-Qpօ~m ZYhmq>ҺP.|d:dgN\\ˀ w4bx \g% p s4CC3\0(|nuMhw+XL<ηIɊtoV,m_2ƮO~B*V-ײi]=vݵ/Xif-~J[dt ]ÕQ&ZMI(tCg)#ʄ&c%XW/Se֠ftv/'rM-}y>GKVMm†Lĝ.mq|ͩ$Aun5&pTtN.1.J-pfL }(EPL{Wks?``Dqy);0.!벴|{].'fi<$MY⍴R6e6*@?9&g_F?H}YF?HгT0tZw,"C7 H #:VBʓ1>hUP&IzHv\?~S"Q+mVV>} 5M:292˴4s$>1Jg%A . 
-7y8ȣĝ $#;'AٻHOMm}xXAmՇǠi.0TǼy߉JfIAF#mlNk1:x o=<[/MN"5T/."XYXPLLg'h,O4p΢n 3x!08ڲV wAEYUgxƛ~uydBPa~W/Zޯ_A9u3>,eRt;I*R#gBx`2jJL2]oAEV;gd^6ٲC])o2@&C51hLϦv/j|OY\°E}"Kl|̂{8.h,E@P$uQ!:%ʠ)22Xb,)b<f>9(2 LCD4. ae6e]䖀_\G5¿*P Z汧LRN?;w/fhwlIό'JqI4 f9D%2fyy1AQ#o`eFы`Ki֩m>Pᇌn; ӳ'5鉵JGHDJX# |pw:Ԭv6m^ǷuL9I'Go b!Ar1WS:Nu~),hM ߂C)iBFt΀2hePXkqYu:}q_/9?'=Bɩw o n&cA=4Rdq7j zK֚Wo!)-cozR\p|Bv!fgOQ6F7,4G"^Y ]E꽂}c>A"2y N":U.T!^3.}b6)!6Ji¸S([Eg].b(c뵶o/jn{y+J{O5 A0emWd>ZI3KR̀<}漿 EyÛ=R_z2wۇJHDg NPc#-L$M^e㝱}Rg˓mY'Y*(?::g~|z԰\y6Cۤ6NELb"L~H2;Ѹb.$UeFS){118bqVrf9\x)D m,E[`jjV?ͶҪnޅ=6/ %gڑp]l!+NpwѧCЉf1okW!2)#ؠ5A5g&1ҙkeȱG?ߍ;  2ӌ*5BtN[|З''3NYFa5LgzvGÞ9׿Y"^!H.;[sr],m46- @!Zjg1ew5]4n编˳[⪵5 彷VV¹W.@Я~)),2/=<46<ڼ?}B#—g҇NzQĭ(B \}q-L>MN)jVaKs-t"-k>O<+tELŏ3zWlf̱ggo>!SD7ewt?+ j&˖VX/.]M[U=> oHW.0>T]Zsfԧ`73n~g (5?9u+nǖxCG0jFW34Ws#ŧspʤ(|6YCcR;wdZpK_=YEt6/r=^o[7φu~ϯVA?;Z&ZiL02Z!G/a>2\ܞŋwe藫ޚs痯NfFIq7|;ggX.^l<_R (W?}w8uj=;׿~(e_$Ʋ/dz}#6dj/K9۱6d]wQV>zeD2ty]ί.64 nĢi_M*dtzH]m(ڢb85=w{}X=rcSxsd-uneŎ1^s_V>>.[w72E1hjsguȯچ`,!B&@H@<;(W1"Tƕ:+aQJ}J ܰo \4KQBrP/\b̺4J$ "&vtzU^_7jWM7-yJ7MK_W +h ڟV+cd4UՔN^-uvqvUrѼX55Zo83F(N^&nk~qCPXӌcL?zĭT*@qrRyҒ;BeHB' 8ŲRpU=5@v%P YQI\'$x4 rw{ʇPcH-3&Eĕ(;4Q^K)B)$ 2`1hIYXdBZdju < ,MhBv' >~W6n`aE1T{3w>VK2d, }WWt6:g*@PΌFI4hH5Bq?#evl({C}^?> 8ƲWE[zX<¼+*@~m׀'6'^?nػ;㤨*(.2ߔSCoU{* z2dxh~EIU dX4ι{Y;',{wN=ycjS]F _ Z>OP]Ǿ)kPT$odۘbt9ϛSuoI|76v$okۛuzzঝykqWoժaa\9fys3R:úMfAt1{W/d42ӗ\5wUBH8k G>?:"z>m#$ydU=l ߻'GϾ{:~hW#305J$>&h:߫:x*]ͧW]osqbuU ^WQW~Ea5|;mjr}BP+ky_:gI.-pQ]bnGDlmس7DrF(ORb賓M:G!`EߙD_۷0}e=a#ы(D>!ur|Y<#rEehbFl1tG8UĆJm+ؼ0Ʀ9{C:w^:,U]DpM/QB &PtZɒ{g+v} QElD˳qXq4$4@N'32'%|BpIP'k-I&XsU+P(I!F RѶݶE9 Q̝Qzi &F` W9ܴH!<K(1uW8u#*d; XƸ )Q\:94 8G%E]V֥ğl B҂Hk)-rfߗWyˎNkOыƱ-YI(]lk83o^zgb fx@5\ɆF$~\ʎ0]բNLnFq-|OqM}RG;jtKr\n,>6z=~倻9ѿ6^z-, z(*SؾLž3W:gI_GW^ 8u1NT 1+: ;zгg{~|2l#HPNFt{Q{ᚎ8}HsQjaRċb% RtlRxÕ _L{o=ۏI}#޹2J$F (̊hX"siEnc7`*!71(#T't\tw1\F`*fr8 3TuP񛁊fͥ7RO.E0{bq8rZq8j9G3Gy2xT̖~c1 ] exZ-Tg$|zXɠ TS‹'Sًb4Է>.;D])@h!D~}{JHIHL#TIBšib:SiPL3M$`*١LP.2WQ\q 13`UV}W]=Kq>W.]ΟmՋk;JŊࣜzr"^, ? Udx9`Γ5%J6zV +){hvmc4eTWqyT,-1]R%#1)gAHBn}7vnnK{ld? )ݹ^9L`$85ǿ(nɾXΆGn?ubqj-O0'Ib8.-Ns@\,ּK\c%I!80 u~B$_#\5F^nj4.~R͖v$ܗZ:e- : IiJ]xEHr+^*"D|U.e 빱1©I@eBe <#,O9P'rjk|9ڣذWz1ɗy-\Rq/\kVUUr%P1@#34WJ,3L ːGU s,P6Z#L^ ?/A,_,qflv}"URu YJTrR fC F!.pU" "Bu,ms7Uޙtu77 Y4pZrV*VWV>zl>ȳkU;w޺`>mor Z30s#K;>8+pڵbǨ -U)Hä!-<)&0p/3>^.I|;ڽ1QC6jj0[3΄y"F3 } d1@J? [j!sB:r (- iibZeF7\j#ZE [5#Bk)2* F:7UKJup4lGeL꒯pO|1!ifGM%T.U 6 ЬجxOAhɤ041 %EBT+@{,M€{ՠ=q \|➗%_;멌.͹9,Ly(YHb=T.vY' T Fj7UǍ=::Yu%T2Ed}UӥRC4I9C|C,CƁN{12ʄu6M$F$FC.Y.l ڰۍôنZeB}Ԃ* A*n;TtSӆTZ]JlsJJ khy:%* *Ҋ-q %V뤯Pr*8֧qZAf^޺/HEA=)_bQ:;D-|ϫ} oqeZjگ ꘍ޫȃ$S=G.arg8$RqLp/7GX۟0}\JtV>E:\g!# ANE+u!i0ʙD`F) IHVp :\ P)D&( tt9ydaJwCnI=)~#kԥϏ[ʲx1M-ղ*OmD•aPؤ&%YaQReK ޵q$ۿo x~?l8E`?k(PN>H3-(43쩙:]Us >+tt&&ӴKcv z!SfF&SȄ3NɬL.$!BB*N2#Ov'^1#w㋷z3! ^  GHZrE>.IONJ TF\2 7{f;zm4Ս3֛ 8ghPՎqbd Y_ŀv4p竴*l%Wk +Y<=QXؓɸOmV{N7Yq.7Gܭ~<w5 (kRMW0LtDG? CNB nU]mpyYoocV JOqR^Q~h+yŠ/z÷s贰65Gۨ`\Pa#cګ䥲W_d4hW{>mi?ǝ&?LK%uWoëkZ~u;?/z?ѵgRg{s>^}Y2^OS{wx~q{</R)]8.qM/PXd]l1:ol#(#i涿OFk5XEM͗aª擫ÛOoOq1Y3ʙT,zBw ˻_K Ăg<]a2$ȉ[C2O)R6ZH2a'*W4ISƖu[ot*Ǫg( oJ]=;:'Z7⳵}ބ*Vf.J6285QР9Q<`ُ2{,-¤619&\LO7HEN'v"*c % PZl GZK3rŜZԍ]s#( H| yBh02Qe`WggoMFi^aK}qCMv+EvJ~A}v1}vIVԗgH+-R* ~>}4v0 :.x?֨LN&je%H=̺((dF@vz ߭خM俶~Ya ÑA1I>}B<0YGcǨȪF1 ctVKM =W )g)y Id22xvjlzw ¿}[<( -TzT,JL3>'E6LY+G$aꦹP1&Y:;Ik^~#aiT s ]9lYoxR¨f- )uC_ ]lKא;BN']Ug.8!2ͻ:*IP UQyr:9)$#(mRl2ޓ⃕>h˥ڊlۺ]lN-,-xs)K.z8mSQLp#ZL U*հfl2=>bEWf;e M F +GlSg0;`q4KB w)"}H`5fnTVEldV4{! 
&cRؔШdO6SlMʣXWhմcKּ.jkG3,0f1H\0^IQ(jHVFHTݚBY :ErEQ4H5 !Ǒ06N (FjSX~ܨ b5+/~j<=PfPP(MCHL N^gbL'@$PH\p&?Il(`q2"Vg="v#b(KjZ-.ʸ(==d'_Ǩ|tsVYGGnI霣)3ŝkiǶxX9ߩvx:ЬQ7G~dvGדd!k xaŌprxfG i3 kT!.դǤ_|K ;D (uNVa"?z -2& IȽV(oL>g DZhC Z,Ie)I.\pQJ&,<'Rm8hCD_{_$CCS vt1\o{e^N3đ,շV<# tCnj&I@BҙΖ0JqEYJH vNkT&Qų jh*_uk(ߚŗ ^&ZfP f *;4xg9dH],vBONHqOp5.膙6֍!,^uujG3Q4D~ NF1є㙕T0IԿ^?Դ!qFA$ 80ZT)pNXH0sJ١ɤ=`D@YAEƩEaW28;(e{pJ:_ڒے i'ÒJZ,u] ;OGCIo/ @p0u/g٩&í;gYw0L3얣Ndz$`c㊓q|S@Ҕe]](Tx<^"z]a݈ן(#EfiaŇŰdK;Io'HkJrPvvD7nc jRl#?\~=\"kI]!h<,9kDyyr]2vԇ:QymYd٘K&RP%8%u ~~x7)>?~_SV?o..v9xݭCNj_ӷ Ar:{Qޫ\)_Ci5-2]7+o{0[R` M]eƸ걵r޳d򹻎]Jlman3-%}HFA)0, #EL $)vZ>wna}Mg=ޥ‚#4A2i |T" -sAP1_ySĖ|믣kھ0ƶ/rvt[]h;U{ӰtX:Wx3pJbQ>qO&:k 1PܧWv̎vخߪ#1dYvbar0te ١`AmЀZ0V)O<֭ͲOAqJ+ Kg9!:49eHNL׮]K JsëhЇeg@ƓwόJdzɔch L#iRޫ2wYX7DfrH8nw]쎥UBs"DpѸpspK@tB. ?>Cr(]PrRQD'li2mbsBSe y)vm 0.&)BDHvA@xV =]noc;dcA֞ux[,ɠ}:ҁxXU=a@eJBORLkdQ>ơ >;>R\(Vb Ice %*;YFfeJ{ܺ_2˭`>{``&4eʒ,ɮă}i/GQ9:14J츍̔ xLIVe湐Y>PDn[X-sv̩_$tyaե,Ae](QMd+T=T]0 :1 9nFVB,JLUA&bRœB5Zá녒ڈdq- rnr(LZ\)KMZˇ?_M OUOWnO{xaVF4 Rݠ;6r n_JIvT:/C뷿ͦB,I,On{ռ(%xJn~K}۹ܭLi;*.}(ݑezٝМ5Vt3B2d)I *I?x*7?7==S/GyL@|!I?bTdw͟?\E,E~|3n_2.ѧ zs'/%azI_~ˬ5G~? {V.Iib4љhYO~3ڱb<7ta\Y gBk|>[7CEW沓&/yZ~9r[Gͧl:oO~i9̾`QI˰78[·LǗXDԮ(=967_sSnx^SJ!HwKd:c'W Ȥ5Ҹ&xH@ꠐɣp/nY4,d_E'WE#P`51e1LQnļs6QX$omTDlT3wiǮJb ٜQg[Vx a־vͦű9)"㣡uw=|8i"N.ɟ6ThB m3%B*H1Nu9PmlhZ:Η^`Mm~,>6wkGYoڵG/=WE Nns: ꍫCl|O_yz1++#15Zx7%7^?4N'}[m+k N\?֠ƅRwQ/kܸmA"¨y||-'m--+Ŷ"3!- ѐJ_Suol,A&*C 9A(l$  CPp0+0FKނT9 q+7 DwZJ1 Y#id0ܲE&xI )$ m_rm͜a(m$B+)m3k9?9zŸN=o-س[)IDJ< Āf[@36ą֐tzV'o+w^Ixښ&<~_.Ru %zV~s&~cƊ9ZXN] SQcAFsD2>LI6^?Ri{_g7vL3{i}2?~1x] @,d(2LTh[3@NPx*p% L50@|WQ$ H3e_26Rc<`l>w}6L ?|Z hep50t a^VKyJk+{-0pHT翬<5guW4WȳgZkJiӿwjR*j)|R@H{g5;dza)h )' RR_~CgOF|9H-7F7YfԊx4 T_.MkA^.V.Ӭni:+rWh]:|v ZUV_<,V +4,)\BsZ}Zi@z3 wjƯ]GpzB+;+\zU1WkqWZ#.]+/lGӸ+ŵQlwUeZUVKwWJd-+- z Zk)twUT7LI|M;`m]sո+V1uX)A74@~d H@@2˨ tH65X Fz"nk]Wwh?kXu5Z=2l`&YBFg}2@rDngf'~%? ]? 
fo=+]yeqj.>멡@B@U@3S5AW'ݸ (I2!V :JOFph ^h8gJj#ř4(˹Aã0AjksYx|H*[/]6VaNɥdoM)z`༷ 1iVg&rDY\ S[3g7Lm`7@hŠBK|G^p"gw97FccwG OUOWM6鳽jڨq \B۫.ޛ C.o`,MtRLJo_goy뷿ͦB,I$zyQxw{ԼTrx[=%mg{y_wT&dfp)'gDb&RƹLmL5sdO|Q9&Ӳ :q (uݑV6T!PgTFq%-Kc z(> cNV&( ^86YS9ɘX-DKv7> e B";Dyt̫#hmBݕtI؞P/th 7=hoqTu]d }v{|:mD)*Z[&'Lapƥ.JXZ9AIeb2>;!]R僐KKqN/76w/L*&.U#b9Zeך9{7}uL}PEJ=ocWli(U#\.s2I%hɩGBxfR[,W2eݭajLYEps:+ Jy8QM&dgkH`0KʣXBCj`g'$N3>>\YN KiTgT w>`;g|fiϳ˒dϐdUגgemGC XE zfӫ@=j* ި9MD@h4IJC%7 HŹ5!Qڗ<LլlEz776ikG`ۥ[D}<5#d ߖrOw"XڙtwdMB6&U\,{AD ˂x]Ҝ^ԛZ&/^o9ic@ÓUY3l4j!r)z _d- q~kȰF\kb=Y~Z^`NoXRuykL\@2s`N3uB SN]0(| n9z2*@{^A4wXr:wSMZer\k/R뙸6Y`M06hDIFt@| TٱNosQU]R 8ck/;w4v*r^Yqci]]8{1KC\) #t^S.{ gbx'|#djRVh+9EV%y#,v>,il=iG;TJqKEou~a<)nNH8C-zmYŧ_n?ډ+.Cq8|Aɹ4b%w^ݵdN`c$FZ⥷$s="u\8MM-IAo$I_f0ƮvV7aW򈔹Hr[k:H(JmTeDtR+eDY ("LD@KZy8|X$wʺ[]\[CmkQ˸d36Jr2PT#H@$(S<(l!nQۥzv{-ϋ17iz=N VD˅^>n6Odx|˻/?|n۝rdt8 &~ozrxNG7M7~ a_gś*}VDFmt8=|p6efo1'ŝ GXʃK/kүn[VX| r(0|ퟳtRK@9o~,B{.ˬ֯KBW݄$#-fv%λp:,D ]]'BX^Qwa.<+?L?!ſE(Vw$Yl~LRfp{ #jخoqtW;6wyýic n曆LR~8nK(vLPn^dy3lަl۔5";#wq@my J!Gd4.%(5i H+/?;TyED: =va~v$H=m,vWw&UHz BT) h;x?AhQA#xmm;׋ɠyA؊νƆ뷰=z ^WmqU6Π'a?9lA5G+Yz`A]uDzH\@`݃]փ>ɖ׼؈<̐my!-Gfd38aj 24:rH2\3a&P:iH^i=6H\4u1ƉwϹIqqQ[%W*0퍜5 .Dnṋ3%0%gtooQ=rMp[\ѝ㢳rX!2`@5wQ&;Qe&*-H0)A@$ ^$]0\q^ WvùeyX L+`ZK0L6T82X)(P@q%@qnq_*C!tۉp%걅ug1mxt`DWpe@SK%K).:d8k-M ԱH 8 9A%'.F\ŇPk`'H= &?+STw$Rԅ ~:N1: z$cRcGEdPնE,pGe eӢQÓq,&cۭQdb:G?EßŋU]p:qN(PZYeu姅.~FViTP-vqg/gW0|Yx:^%G5Y7`ǜaѣ?bahї,A.x[FFYS89(o(!>Gx+o-10o+4)MplV 7;'*O@^]=jлQ6();R\Y™]s\`dͽ''8" K JԻ r07SAxaj]v.G׏VFyue2 | b6z劒;NKVYfa87TG/_e^cϏ_s4#30+w M 6%V+2Oa268R9[!w9sYC<JG M(q cQ=,/ՙ&LWT?{RprgOW<BWwmߏJ,y>[GjѓF,e9K, Z>W'5Dx!/WiBw6M/nz e$+fJXAT=Zλ@M;@ڻE]e|pE}輩Ws 8SW 7-U&9,?=~aIT;34$F 9F1BA`jad<.pCZ~(BTC2`]ЁɜH(i@JA$IpiI6:AI kU.*- KC$Ec#dHў}ȹGmAu-1< *(%,` rT,QR"!(֊vEGF;O4v MlhcRDm &\1>v=9ZNF<(BY7Im|&Cyt>h^""1\N] VTn,W 2?ɔF- I PqP;r{|&&-;;#"~{7X[MA݋GڽІQ{8p4d-wlY\V?' )scwz?-kwd~ajlV;Sl` vJ<ɵhXuRSέ,iǽ2cPLL2T=L0Rd=C;P@!*T PI{#g moJo7Tu5?ϖqGfNSS^5y7M wE. ⌂d)Hd)i>0l4M:*9ѽjlnx8 ssj6T"jfNv lgM$hӳ5㊋-Z7PkKگ֖dwc|4%E&ECkFWan U*>S_[%i:| )kkL^r\J;w.l|LyFUKyO9xv8H;*! 
*d8C+ Ա5aؽΤ\z<}QጰƵh\"$YزoN(@-\JD,\At%30J+Z̻ ќݣTg2 {i)I` #hr'_k .^8xM:%?2y_uHt cֻt6H=9'?Ww<~o?uM{ϑO:ri1ۧn{4O?^DBpa]\v {nop_C3ۮg+ ;f&\~ip58 ֎+o:C\Ag+ip52ςq5TzpurTσ+R[h\APIfsW+5ڙp5yp54hq5TnY*&7<* r#M㮆Zg֎҇ Wg+ [?ܼ=|?߂WۏG:N~};Kʇp߶7þ87ˏ}A-o]زhދ _?y_ms> hL9{ 3睤w1վ#c교ȋټ!8DŐ,6jsۼfEL'̃!Ms,PV,P~;'=2JqHS_&WN|Z"\-SsU6\={#[x; 'wۯC`අ(ov$ƽv?wWnn/K k޾^;ZYGQ_v[uNi"LC0 \'`z ~*0}&U v"\ApWCj wCڊ W_WhVPOz~e<]$8S?L=3pLeXfWC=x|eKX`k(.Mxz-9v"LCpy0=:3 @kPnb(Ӊp59yL.Yp5:^;Js3&ipLSCmwWPq%p50 7 ZZ*o:C\91Ez)p,jŬWCږ&l""хp}!{K.SwWCe܊sUpL 4ruOMX;J WVp"jȕi] ^֎rmn2 ?Vc5I-jZ]D%4?MݶfՇc-Nѽo'':bYz|%&lf҄EryL/ROE*Y6Ln݆= fW,jȕ8 ڰz\A%ņ3% Qv\ ?e֎+tq%p"7d];/]ˋp5zv\ 6\!S+جm``w5ԞYe*vԆ/+OnqR\AWCuj*V #93ݻ` v\ :M1}Uq5Tp W_WJL UFg1b׎+,Wk<=q]1Uinw'ؾmEa/'r+Dc>-(WGQŬF>n90 8/|F|>'f|d0?} dNw_?Ci,X\AAn~ݻ~2;9:ǖHkI<z嚜Qȕ'[i%cAKduc=ךCXmlj m*&Uސ#RΕJe L<_cl+.̧;yatV 0cN:KqAg(umImNj%5⏩6ZCjckĹ F"Qc{9E}E*Ȳ4˻oP,\؇&H$[Nʵ 5+cĜDh1=XzLB؉E4bvW|Dp.{ptpEi{)j{/HD[D~M5 9f2j [a!F4QEtJds*`?ɏH^5Ƭrjz=$ m ?m}1D( $<qcwgBmsyJ& -Xnj(ϠdyYbwPOB(R>>%1 ´ZYp4lhQMD u}G61yXuoZN$mѕ si3,)\@55/d욯\ |&FSXG/[>j^,\ ڂ>2fBmOCŢ֔!(2>Ճb :=kBȎўU$PKͶ#oG%eOhB|H:Q@c[ q+EGgx7Z [s;895 NTV\bJ2hآ.-]`kBhcnuї|H,&޲qk9aX +3\To27- 0B؄X3ǍYlXkٌm1*\4D@Ѣ.Ԏ]SpPBh'rH,~/5SlG?[ЪtWpC" b@PT  ʄW{ŕb1VE%vp V|]QcQB29(ֲ@*sNko?೭ K[ `Q S =X ""..w^2fy)n͙ zP,,8:M;FcDJhSI(%J wDP% b~$!`FJ~OC $!t_˪61+-H#3xx_.N5~d}2 yQjyL ¶VAb;~X,Elm%EN'K3jOeLLobM=JHc Cwp66XGsP6D2D b֎FP]G]K0SQX=jpĐ*Z\1;Vnazb iq% {H Ӧ30b#hkmɲE6z?4f=@>,1|Ic@ȶEJ,Iz>R,ǖcjKdԃ-ÒẼBwD"(*!.h]`1( L*ձd@uPK CDdU 1\,"*Bd+Jp8hc]8QǤMm|զ4V ^[ِ)GhDfuI[ ԤdZi[ x"nw^0 %0l}A WAK%L0Alj m`=A; ns:z1)O`Y9-Ǵ|l8QdRZJN n=tUtlK&z H[ e,Y:jm4kY+zZ(U[K)-=4Z; ƤD/woWR$ vM3`] ɡm*c0#Kwh}"z7%fl,nw]c;(XVH :Ш:V-HOOdPJuQ~g= --X!A|Ǯ[`ż08iR#S+ FA5ng gpڹy *  SVB) j cGQ0kzA,+0(HڈJɠhc@sjomӺƝ[ [HkTAlHm%_zfxY1PkB'еJ/wӒvlThSBT|Z҃7A6I4Za-"mP Zx7XA^oS0h–f|іB{~ZAS1V`3G^cOV (DRMГFn5*E!nކ`^lʪપ5ri7rc%_Rܰ#ƍE]0 Th8uOS>#8}1E4q, "!$~1˛ e棈JRF8)u9R>Ǧ:ߎVU:2$5dd*:͛*c }@b> }@b> }@b> }@b> }@b> }@bhI0R|}$RgZ=GB=> }@b> }@b> }@b> }@b> }@b> }@^}@^ r!xc|@T6> 5j> Q0 }@b> }@b> }@b> }@b> }@b> }@bt> H!%s77P\J}?> @;}> }@b> }@b> }@b> }@b> }@b> }@>> ߴX~x=TSu\l֯ozMQ=-{}5\ppR/^ۖz0:m(alKP7,ےa[rl[ztGg|u[S¿4 uᑆX`̎G?m&L!fU)#aVq~ϗC&f}-$Α#RiރEُD|\F4 ikd.4Mh:MʭAČJ) pFUbt.z#+J]*"NBW6Pz2fΙl pw+Ntute z>'uE_EW+E6kW~P:1䤮Bl>'nt2ʛ CNKΆ~0oe:LuS0No^\ h2^:J2l-[^,'.Lt>ZT?ucRgbNPǨ :*u_̧o UVƴ=u?>M9x2o*DkǵBn^<1?o'uzU#;F{7*˿¶(4majVh(|\N!iIlXNiΚm@Ma%S,O'^}ݶ1y+hmTaHEuM7u>~FeI*"Hp]6 mPC25]OBU/R:E\eKW-C+t讏y5`'U?Z=!xQN[:+ψ 44a4 rh1M? 
M+W<X|F ]уWr+xJ{-Έ "׋ 3t"c+]5!"6~CPFc+d1#"!uEpυ Jf:Fr"!uV\j5t"V2]!]d:'"15BW~#Pn陮W۩Nn)&/]<ԋ"Y}{t{,>Ve?"%!m|&H1`*i_1'uuܸ6=K؊Bk]ej %b,T+cX'ZmF2|d'htCyR9yG(h?Sj춢Ά5Ϩ 3]}7tD>_a =V->ޚ[ :8M3I~4m8a?C34J@TDWφUntE(d:BҘȈ!d6tEpe6tEhPZtutefDWlT̆w։JgTȈχnЕM ^]JQʌ; ]\M0Hh:]N1]]y4Έ&$\J+x*(ΈrCWך\z3tJ8˰l9 ZzV,uGx&tDx&ˍgzzm)r>< ѻbTdAF*\MdKh]Ot rOCCWPUAW]ڊlA, 돥:Vzxvˌh42&^ epLGHJbFtE; ]\s+BPtutv&dDW8森%ILWGHWx}Ft!#(M.tEhABi,ҕR)]࠲+{wKBe:]ʭ}N|>Z/vE(g:B6ꬖ ]\_" #]')#""CbtȨfGIW^S|L?f>ƮǃTR)2:Hਲ਼yE62J3tG(,2yZ<8^~5^t:,]Dk){OnX `Zt"IOQ@E:KJ%~r_tEp*VJ)\N10C[U*BWɡ#]`MPk+L.tEh:]JMN ]\r+BkP:tut"fDWo 7`к+B(GmNVH ]\MntE(b:B6lcү^:f}VŌ񜝞:mΛ~ &e˗q$u/1ъE,C(P3Mx-*yڼ9zt'V;悿S7^ՠ^r3XN'4i!]½ׯt1]_緮wV趼ȑcAx셶~:hufw5Wlu˿nJ'||zj(mX]'';%Sv&zxMiʐQIu'pnpQ~{ \4Ej.OҜCT֍YAZ*(b#d)[]5(bLNSm6m%}jTY&=˪}g6_xh֘o$ڿyݩIg0n0~YO/5Ob_٦MnT RxMoU8_L.oMF݁E!6X(ł BmvٜO'+xUmPR,G+QîwtN٢}V9"ʼ~ ( DF i!F)E6 ig >g(v iQxU9vWݍ(o}\G=F'˳Ŭ/a<isoۊ 7 xu?(nuȮ{sn2+Bt-ݹd6Ymtzg:kV(DLS7Th:'ɚFϚhH't2<ܱ u,xoׯlq[#arOX{#Ļ\}+ʽ>0ci;UՑG}|oQ_w8_nV}#8ܚhoć'[ F%V~վ֍N9VZE.)uQiUN &b)Vsg[5Q R`5ƺնN>RAwVVV:$[lc4몹ծNI& LLPQVtuqSIkTkՖF|{xj~jn/ 9Y,6Z{zݗ7NeQغ"jjW祮bm(i|#mc%ͻVoɓumIwCŘ {#Cr|sFߤ|F*~x!c8 (we͍Iq7#HEuw۱3:H.IaAT $EJ E" }UYY23,$U*1s榟2~4)GK~:olw|S>کdq#I"v&V ؔAkCl et&EjĐ["N1,ET.Ht)/ Y!Y! } kؙ81"yjCX➶Dͪx ﺝ}.7L? }xԢd9˙m+kjݼ ̈́Z5rz5?=<TLob^l>@Up&ʀDhHm*mp6;ǙG™JqffŬ8^A޻٣Rب"(l%g917;5LN`٬r}*`6 C>08J#~ ]fkqLBNpUg>5Th?P &CKU mxRUͯ?wS^JUA b1-5BS~:LCbXTY(A+AD]k6\JZeacJ>=rEf1\Hyҡx;0h_Dd:?JaM;:n 9Ύ{w]e^Y)I33$ Hz 9_gn=7?77*v]xG]wɌjztt8;kЁeZTD١r,'< N܏Ikٙ$Qi-SCk}+a:2svQ)EsQ9 y@X;gtМ&yNe܃02,1/[]@LO "9+ϰe3ٓigD9JNtL_H6O冣CWdi%@̡Ȍe y1/s}b#s<؉v׺?-OyCYv϶L-;wȎfoU#htt!T57}v#%$0]PgœCϷRgdϷуI2zB9",8 ZiIr(tiB]J\%zp Yg*阄BC'sDcי8{7}qLudHPaxJ~-%4C-\. I'e%@#YBCȤ.S{ۥs6Uzc<E2Ljcڠ>ۮ;R:8/c_@I쟭YI sr<ip !fT6rcP&ԩuo4Og4g>gr<JfHQqYJle] $0QW6VD:b2^ꩿܦ=P&Ms]N='?yՂ lO]q≫?JZOT\qR }^U #j3 X,{Tf_ ]JXo=ߊH_/QD==k|3~~f5 ZҞncx Ph{DH NEf#OQH@~rPWZYc΋eu`P})ӖeKQgr@hReeJs5QZƗ:YI^n7^H3'{.ݲ'rFݑizo}m-i><Dk`ߏ6r7ڍReBeEIZBJ=L:UjAwnE||go;1 5lRC90>~{bv4شzx!i޿J}BҌ" bR9 =4<ج  .&4&xp83p w _c@H*V$92& DThp~N>VXՑ<=#@n/S^!n8㷥;"[S#iP&!/bT>iQgIscȔIa͔3sr=\Oy^!mSXTL+dt2;d.Ģ.lyڞC p]%~b3Ğ[=˙X(@%dƣKf uoYY?t/^F_&h}G]B= u~2 ^‰n}fp/}J V1t`qeUi\LHaF[rQԻǻC2t6 .m__ç(\͹)ۍ"OhH.B<]#D.,afAA+KHpyvy3':ZP=ՙWr xYØ4+!@nDw9; jU;}¬滂f-!WU"}yUb1Z"Z4Ȓ߳^X# ~n ⸖S/WOq8+Qk_!,?+ 꼯[>oհ_[%UDo*pV6AL!D`Ni:zu~"5m[cop=̠ ]"QCtד읚f;ls;]*d:|x|"ѰѲ`jZVUQ6=l3[]~>mr`ţuIi"D&СB_Q' sZ#v"^?AH\Y9&rX1ingˋGhQwcY!00a l"v.c0ސdA buy=p;Ӎ 5.:ƅg8gOah6";kBoƖJlE3U1z{9v~{]T%#HSJ^29F__`ڬ "YԜ*Nd /4A}}s4>2$l\qx.ehϽ5Ii9e2לitKRxZh03S \V9oOqdK$I\J(K{f;LZ4N@A<2Q*h]mm D%e&u d f,s]J\FЃ-:TI$ >9":g+ozFӧLι"0T4Г_pnY]^.nP}I7kwumy-=[tV]Py"čtQe2;΀  VYTuRBO54cH_.fP3K/?͟&^ t&O gVW͍vYk&~&6 <^H㊣}sן/sw㮦)i,,YLi=/y1i'W䫌E(ɗD_r:VORG~|[! V D1d/-8=j l.ٲlmLzLz˜"r[h θdtjG~>|o|^!*JtN̜`"Iz\uVn,]vE.1gwv4/u~m?9BON9hsjBM/whr P/ڴduZ_˟Ż6bHq#q6-gVgLg~Xdy鏊mwr}molי6^MgKgmFlzy7<8[|qm}Fn붶0zЍUKӾY6t$UՎ^YI<|O??_|/z_'|pf+0no&i(P*b O3#+.yYdFtt>io4.(c⬤e#> =}ɹ[W<(u{Y4c 2QKÕrNѻ+y>0c_md)Ù)"9児:&@.R!BWYs|DC]jS>B:Teitj="GI0@^>8!*wWCځcApB@Lqe/G2^\,;z"Dq h]ؘaڃwX[QAc8pO@)Á<4|16Fi!荎1+ȄQ o@0ƉUFŝBzO=Qhq'-W,m֜0]06څ!.1ȃwI)#1kj~%O죒=+ 42 AsP>C)P7l@'߰A8%E&eG'0erha*^QYC]9eHчRbC[%s_VA&ݳ9ڛ:DNvk[~xia'B0Bh\v Xʶ(qd:(\E x`%o̊'h-8@},*$1BVq:?ly&OT檭bEUP|xy( z'--rtqerYf] =+rUK[R{|=T13p*Q %)J6pxkVI :_7y0\JFe0[^D[SR*!0 MErF &zɖ4'͵!7{lXؕgO^rY[,؆ϩժƟ>9UI FGOS$@hΦ~!_224)UJlieQI0.¦& JFd v̢ݤ\VY LO0 s(mtHmRHтf汋!^fLK[e6y阸`;Z- tL^yX@CFː+Yh LA1R:'l|k½ɹSF` {zQ0#.}D>Ę#DYp Ā6b;8Bht)ęUXs#=i(왈ɹe\>)(ٕg.ʑ#i byhhw!(lȥdeRPJk y1=r)8;ѱ+{wٍO"aw_ky;A5L ^sdFz? 
n~O`HQQNl%v'^KNRk_Tv!\( dZ[ P^ֻ2HW+ŁW(z*\)k}x\JF\ ʃpO5jp|zJp:pZQ 4.Cuu2BJm+ V\U++ Z@L ipjpErW WR2;q%`ϐGP0P H0k[A7Z:@\)LEB \kUv":@\Fpg*W HvCuuɺB"gֳ"Jj W܈ĕ]Mv,d=v ՄI;'"Hn,JT^>fjc3GzBɊ 6Ҩ\ 8@D0>|~eE" x$V褟oM |4SQ>ӠS; 6\W=/svSn*7d\Wv=gmĵ TGʵ֝Tڭr"L``[ IsC4Wrh'wz\nXScF.` S sbIGl,3;P㤩C?\UWJjWzqoT;$1,E)'Q%kpmJ,2K~t;~vc[vbqJWG&/_}|o3+e"];.]?o_:ɪiUroo.ŐjFY_ˣh\__^-oCns?^'7,Y\?jRPm+r!T$dMnjUtJvtڵ9Fr W'&ٶ]1-@y74`Z1tLJ#BI9֝?}JY HfC4j ;^W$c]\jWW+LpEW(yG ;t\JF\ 59z+x+T+PfcJVQe5"juP%qEqe9V3hpH>t} }@I- JǕCĕPl7ZW+k]Zg+T:H\&>>_Z6b16x1zϸ{k:ucj*=:Nk-lEf 6$\]]oIv+ǡ]n d> /'nEoKd?>,(Q*fh 21 g5揙'kOO46GC;_*}Wn6L]AZemY/Z-{d?ɱ%Cܹ$k޴g[ds$oUn:[wsҾd bw֧qd7O^OH/[}6%Z].Ok.%ozE19kj`)tRΒv(Ev'}?) kV՘FTU*qƅs5Z-zPէ?il F:O [_jк1&XC@gL*됵V ڜb&Rd mh-58U͵`0&jڠ5m:MehSme9T,\XSZ@tY;\ ]ӔR=FIEKAa2c1^#!1+>ol. 8SR>{_ឈf(%N4Y׀1AA')Prʒ&S$s*0H$Me{6X4*lNɂ1s#OutAڳZnB BI2ԆR He_hBv ,;s[rݫ N7h A^Aka24ouh;o۴`g(*f DUXu>:_ Ţ˳եcMm̭l%ODJY`(.t-a64GV15k.L7m\UA*)E` HJƬG6s0*T)…]ٻV(V*RO)8/H1rlG?]Ze R5Rs #̆WBסN#I9GHM餻P' 58IH&`eH*zbiu%fH57]J@ e :oQ9Z Ȅv9Hc';uOQEgyMFݚv#$$XYu0wq b *a<۫PPJ S!J@T`N `Ʈ= pXie*3;(@Hq`Q ڳ;"JPAw/uzV qJi0m+CPwfnqIubl[NX-r1~B`X!gѝmhFFP4 jݸVU*"> 2߬n$F ,/^mYxxٸ0jn6vi`7YtY.o`t^]ہsL7\.;5B'eKa`GB*!bjЭq5qԦ:zhwKCÉ'+<f\:&asS?B '>{o$> H|@$> H|@$> H|@$> H|@$> H|@$> H|@/DAA\s4h( H|@$> H|@$> H|@$> H|@$> H|@$> |!;p4> Y)K1{H|@$> H|@$> H|@$> H|@$> H|@$> H|@^(0Qţ-g}@i$> H|@$> H|@$> H|@$> H|@$> H|@$> H|@/zcXOW?vvz~QVW盧[@V6ٖƊGc[rZؖVmK@i)m%ؖ>zG#+^ţשc+Ej勤1+F ]m)|{@ +6ChjGS =w(^"])7bjsz6ݪ/oXs^^z"ோ9~fz9m_ڛk[]-e^khݺ5u|>UpW2n~zqW>߯#F38Տ?\o4 ˦/.e7Pj9-7yZҍ}c9-7g7ZR}7t̒- Ia}zFȚ#T.cxcuJ8R1&Nv+&vhn>-Tf#j|[n^%OZq[:]ж3M.'AjG EF7#ƨY o?m>Oᫀ~ gP Z2YpǬ,Ae7֫o䆭>i Aw~ȏŒU圧~W~C65*'|%[°@oUuf})RoQ8&r-{Ff.DrEj&n"B_yN3 D?R\*SUR\}p%S<#B52fjwTZ~/0la vů }whhorqoܨh8d.8gjx>9KJy%M@B" &Ce_s ó}W_h}ˏo{ٙ.ʸ G+Jj*0KB^dRGnVQx@%K&X39E4\W,o`8w!ӉYAdzyوhꦧ|g%ad)H 8Ӭӄzb<"f&yj.;qQ2k*U 2JD#Da=Eț;זe40/C׿vYuލ.IJ"19)LwW]rȚN&KKP_-wȌw~ŅK&w{ի4Xヤ e+˿8o) xk#1]hI|ks3Evʹ\k]3oѻd.0(ue&[ y $M"eJ2 2 X X P[JپOªZXaӹ`,-ZJJ T"po-D6h.]? ӑn\K6D+++H g * P/I 1'MPr!luA@9D;'#AYG)b! y)W**FpXOu,RDu)m)ř.1bz}= uXV9߷~+N r/[cj9ԮSpYS] 'L߷.|>V~x(cЏA%T+lΌ[\7lOԐFERkvC~>/7VuMhȢk_pPC w8g̕ekY\/I\sdcqmC zN,ޅ?C%xq=i^3vusQh{ֲo;f8vIݏo[8Q(@khsR'wN<V]>.D%v<$9:؍ݫW1tm57 N\z!(I jr%x gS9˰IPTcE:Bfvf++CyL7-sɝL;3koԔsy^Pؼk#%.xܙn\;it~}Kp*[1V [;o,3#gqw!9dQZڟ^H}w7CBy辈Peyt:GcZSmԣ2ge{D0ͩ|( 7#$hH,VFYyUŃwh Dq;jv~%ngGZom.fǔ*Bc.mɀuA&cxARrEpORE';վF/\ƂdxFQC?0@i.Lh"P<X|s41푌,Ty n[mͽAwyw`υ \A7GABI@U x#_G<VLkֻg 7]<<0jaN֖>->{~Pq'ޣ:wFiȤ 96/6?`L X(: gf85ZAqBqM4Ef ^gx4L:#p~mYrveƦ(v|2R7lճI@*"KL-*6J~ȗ%&2$6 2[aǦ pb?4J(<2r%IV H}=KڽKE(Q L<@XPHT"hK"&RG Nڠh<“Fg L* } {K4IM{$-wVz=S [ύN#Sz*;Qha1r)S'pi V E0I0ieԦhK HLn.D[Pl ǿe>hX\BAkMLhXP#gy6Z0|JHj_C5';ZJ8uU9&weqGZކ1X{uc+?^>ST޼S96AH픕T!cy0 e-/inԔs+5KqV"2:Ř'.2&&NB1YϢhRBR)\Bj#g*aac+PQpNQ2k$7sj~rc.*_"EPqFA2PB$2ɔ4D>TM*9э"67< l9`Mt*5Q'AIX6$i#g3bA?.ڍqǎ-i-%lAӔ! 
B#"7A(.a[AoFIYbF8&lf8w<)a@ N "43bG:G;̦̦cA 笕ItZff_m6n]X""ЉLƸ>tt,$&L"sO B=$Ӵóun+&Ef,$AFLFe+阃G#q"Y~eXg`Fxb>BP7>% C˅ͱV^?!lX…1Ϸ-| 6)+C { "l\9 F$nj9KM@L 8pX܄P]NzK?Jh$kzUzsTVGn5אJVZ\`;ƒ-v3, Pb ]cb46qZ 5k`g2%jQK4)w:Lngϋʼn9,81{:āx{"J Rg7a*X/11DW-YyKdh5 } ڠ3O#m|`?~^;oK@fD;hڇ8vGM&7,O2q́Y+ qd<+-#*/lEUu⇽U Gdnx2W(?-e8aTtjnhcRKJlE~?7mnEN|}Q^O6p蠼rV6^]q/$Bfj]Vr{k/+LHR*Wi$i`)Cu:h=E>[,yfj:˺8e@X[6rw E8d_w*S10_at פ{+AS1NE3Sy Adѳ* =(Oc '"25Ži>- * 8[43(g W>}H {F9.x902eˌڜ@hv+쬸S޷7JT3t4NPe >CpQ 3Psc8/yh`?XF^ٍBXN,iE"_ƕ`Va2r%QL/^F1v>e.u/(b! +fXP+8"(ɌAZ}2C\M9vQwxn5jE_Qc+gFâxhq+/-F)N\-Rb[8NMjJ.km7*IL!zK -ڠl5uHmOK Ђv4z%_@+f#/ɗcBIutن)L5o]U$RJ&`NncУ v' -p]1,G/_ 7@tn[l]5ʹ-W]7gNdK-ȩ {sP|-(^}aO۩4AXx~ lʴ o8x>?țtDc1 3iiOH+ЄX{#/,Y.ewS OxXyv<Ϥ?Yd:hyNj|zvuQOFK5X_SKEZ_r TN^޲)Ϸ _g X{;hE (΍#T)+D"k0it1Jbhc4J!^Uq+0:!# 1C+5*uq- ^fOH2Upͪs8R;:>:APSI?Zf_@JؘŇ)ZuK1HP <4 QXt<i|w&vyVIf$fd^ɢf=&xtFJD>@ DV!R [ zKCV-`կ+KP [#,`{0[)4>œ9wRʑsV֊]M%_M{q4P@}L:ڰ3a&Uױ% F[ t4jYI1mĮɵ)@V; !z$N>mq'Ε.{-*c\" f]pJJ5 yOs+= 063hPu)׷sPM]`8 x'vp<JBp9/ !tmVj`,,G"JķSR+wſzQt~L.{|~>U?xH=/}5'\vU|3$+fXnl ѻzmVn,\F_=/a$SCcl:)(,b]q*X&+jGPZ\W|H*z.-:x4X\Q &j!S%7׳هL1 7ځ~֬7Sˍ~$:_Vфh]]XAa$HW1u#"I{M⏧pZsZdN2fTkyʒ_>^H 匆/3bӜ!f,gޱ]|՜DŽ%U"*:1g/>dwd^ZNk_FJ}vgSrg,zK΃ Xq"dLy8p\g.>x Y%I L4݇|p9 Qȟhc6Q'lƞ 8Hqt@h&DŽ'فwlCt$R/ᵀ#GΙ'}}۽yؒH&Τ,HɔsrŞ[Np;uL-I.:^e4GfvnE0vWC0]T}k@{mIPژ Z9z78FFȎP "xG .g8oB^OuBcA|ΘFj"Me82فwlC \-Iuf :Jzt+n4'Ca+&y_x >QJ7Lw$IGg¢C I<n4Os\|8դ #U{,:0^hVwePV)mGo@(LeP$yEґ\#@U%m ʭ38A^HMBg=yHPK)FtwA5DcJھ+e"H|E-7A OlѤ: xx%WwlI+[!n/%].q{]f(#cL3bsl]c%qX?QZҾ~t{i%M$?c,Yᛙ3=5-=q&J>5~F9V w$\.Z}iozO1dmlɿ{t?A}G:@f d̥$Tv shEC<)9$yeE,$z * $D<'%Zy"QI*9l߇N`~KZdRV9(pS_1^9'y:2QO#MHe]J565`klK1ܷ=V#˟(hG?R+\RDB_&Ư+DP gIϒd%>JvV>aRz !5mV[c!Hi$^ti:P[=M_)tLT"|r"|Ll٘hR]R5t."ϖJx{w+Zk& 7*M]!cT$p*Ӕ _x M!%Uȏӷ7>&M^xVcWegS{Lɚzfs˓րmMP0c_^@ԛw@v"׃XBgJ-A" ͧ́b‰h|? ٸ?7#ig=xÏ?m~?[qm/& Rt͹64Vn~wNnS}Du>_)Gwb[ٸT[)͞lghzavƺGSs=% ;q8ػ`1m[C06?)5FbNwMpD&8bL4N_kI-`{Җ7| ȏG.Og:}owm-5 }{;|י1lY7aN>u)bL< Qed́HQeʱdRj$~IWH[,w{VJ4I^LQM'K%찧R RmYN]%t.1 P.>\_9 _m ړ/Up`KI! ԀTf qÓ_N+\}:w PށT UBpl.闑@3 %#ʼh RȝD֊7o3ƳXi!T) 1_c賻zW-+]f,Q-$mFqό|\=rSxiۑrm;I➔7^ɤԤ傼$/Oɘ2o%5GWNskb P148JSK+8:D|XlOx!-viY?w> M 7>jz=/"~4&+cGQq5k4cv,rOо^\š06)(K"sDw\dXxJc,R=# =AH (Niʷ 7ݼ9I Q@º*V+R iY?;# b|)cpF%evZfzB/o>j%!vr_҉"cÓ??юn] /4P Yoаa-]/Hyh\;[dl\~qq'z~_2^'>0-"'=>;*caAJuӽq4ZY͘d:c Uǡ6m2MdVtM}w$pXo V'qԃGSWbD7RgakL"*H9FYT7@WhYaUgԛ=!lu @W#l1_6 ʉ'a zA0To{;w& _6?D@v^KKyPۚ7gYoHx|$4#=\<kw}5{wnF[)X58 9eV_/8)Pҫ[Tt8mЁtټ1BǮFMU&jL =iu!]׺(zߘ&)_x2ej̸4z}kEGZGjt~ S:-oT@VnV龧o`Aeg#q{y Jn1 W/}&J9uU{?|N6ÿbzĿk뮱?*ڣb_6LJ6K#cN PiR2_v21ÓO2vi9+%]65?c9ľd+듟 u.m^U:\矛nz}?OCPX:j]˝ʹ V)7Y%Ajr=0k #< aG@, %pS+łj'﯋ 89W;Y,2[ɡ?"UU*]{WLfGu˹ڔbg<3}5:Gu1*=QMv+A1GN^'hFxeP)r4mJ>yQ hE޶va-n J|j4e^5OVRFzW r{ip?o( c# H{BFE4>@n7-cl5Ϡ[9װ.478骾J*A}4c6Ԁ~gfppvznIj$2xxOG B|c{\n W{4dOrQȆGe H33=Jybp,J3gc1$OvW>ɽ!rUK"?EtoF0%a>K|*7ȁ/( %LPCX! Zfuƺd S~"dכTrܜ@o󊁂zrI66=.|wo&@RgS^r:e7ug-Xѳ]Ú/#sؗ,CPJ # HAjb-1ꈤ LXY% ,Ϭa^-ZHiT4.nSBzt`ss]}2Bq<8,oĵ`KȢ(7A0Ej󲕧îUk#שڨ`\J Tb"Y`P>gDJB-@)WLwuptӾt@o r.}L޵+bEw0ٶMtb[zP7~wHɶJrDMؔD3#=bgkFrgzw-Nܔ l\>5tb|_+%̄CW+;!em#Wë:)(v̤πCS ӵѿP+!^k8o(; SIsI~&8L^J.Te:ZTX 1gXgLiN{ &%0JI3ۼ&%pD2$WJ" %%Kr4PoA$.Ջu$Ѷ"%Q d$JP2DjZ%qL%ujmdy,mӓg֓J xwG ^4Q@(#hM rv ׸$r, +5.Eg6f4@Ͷ 0J;n 4`rKolNw} ; 5[3Bj[NIl`{ h(,`Fh8i{Z‹œuiPėT:H7cx4ىdI  Vco!vGA:tg=&5 h˴Z~4,A Mjڳ-"(ViOj-i 6LiBup. ++ "0*uwX"]0?h3Uoө/"9å{PڴVLozIR( *eu @e$$W6IJb0˕S4aBZcmRů)+쨂U))Ϭl]8K`F,<& >vY' ՙcWaWumik.6._TfA)l4Zz70>ڬ%g"'D7dRZ<D5Xj! 
3FvJI۵9t*] 4HltJqH 75PgAoK :J!Qg6 {v8m ?6zWۄf@#4!Gqk6_zES=Mݱ>ԋ[e6Y@{L!ܠI=LbN5,׻ 'D- l!h}tq<2ly ^[(27iqX~붿p)j 7荒 :per?&ĂƼ7Tmk˒pP??l?QLWV5^;ɻP: rWpkߓtζ7df?;:Z wPJH]uT8u)hk\\Aq"z,z1BBVJH-=?TeD{4NfB:/ܟ[S?f˭eܳo?|8cFо6F:kX>Íf"yi]eo=yI w{ƿK&צ~X4lb԰ zףII܀-evp ׎k]̴6>vl5IޏIU{N[ mgy+!GdnGkvI5k$\|փ!` +Úo@乂W~Zm?oZa45<>=<nbddn#]@Io^/xق_hq:y0h9Cfл@B7ݷT:̶ ::}/n> [q1n&/qϺ`'?|LFnVrK?-ab_Z<~k_mLOBaq 9[AIsIBP\YweOΟHb [0:=MZYزM 3P`8CpmFmx[j c DZ>]gx.;oO߽>ʅx@`J\J{3UkQZ Y,XcК,`Bsɫ5\KSˡ5լ4!ŠԓsÙ)*p6qʤ~HD2:,ݡ>z02 /bK6Z`Sa 4?0Yۡb?z8ܾ/qz>~m ߾r+7,wag==^?F@vf8q0461̧H_G){ "Op""&  g@" ϵ,`((HAEᏅ((e)`Z`6‘'t>N$޵)ȵ G `nB͘R [\Hc|& $Ԉ(|!>1BnywdW 6ZD7JzDaot%`ݖ]M 0֪2v@l&EwDlsfi|ݏ  |: l,pR9.w+[K%.NƵa2l[On;!'Ƿ3 ]M`>,x՗`.ҁBj T 'n5?23lﯻ]6pYO/$R8cד)gkTii,2.RsAo5NfTc: =驈h08VcvX̬yOfok: 6e 㘋Ez [3i`n- V7a,XTdKg-VZm]JM@WP6tLŇAMP\DjU2\mAxbḎɻYQ>s[B]Ƅ**6}=-TƯ*W"Ch+ymhtK"Տ+8g{13q3Z1q7Id1sN*vgbO+Hεc+_JB(+(Dq # W,Q?&Pւ#T˭lv5Y_ O.U@X#07qj-52*Qߏ)hE*0q( e!2jUwj9`]OyF# "#8bJ",|H| \uJ&Pӂ4tiHHKI?HD&( PHT( kVfR!PH57!Th7sS5lJ)(* q]f!KEiV0>RpLwm[i6Z|kް_-Wc q@ o<3,a4){MdH<>zZI/\mWwڋZ7G*w>-+)TWx_o<鷲\$._[o~2onl}uz̀ϳO `+]*_|9`{e"5F.ѩ;N^mdyd|pmG8RG};D>lǤSXпjQ{ywۂB;QW mG>RqQSVvKHbA|j,D!F^U! pI]xWu[V׽:d[[hWG^L#V\`t_6YGo*Ki&X$ -_K9ho:-/Ǚ;ό,Xd%CïE/=FX\L}zHj'g'k28$eVAjUL.XTٶ&.%Ľk֌;GU;3=\N%9NDi6#ie"j ֌1I8 Ly+L)TEeJjm#%B{.Pf;hM2q (,VEYY5sw],mR`&$]`&$]`Wo.tr e}9؛vc z`n10`Rgw nT LhAw$:#1Y>˂cΉjLԊ<'i`b;Fvtp3 36ᖳذ=xj 05:M{sF$ qM 9=3zu5O&XAA+A 53 9р "Bd9dǓ'^S12`_j]p,AzuUB,UJmJ- JKr!#kz.=`f,<&RP۞աR0pQ)BYaL0.?a ^y4OYQA2S6 { RÅ{q^p>o} gc(]_q"o];y[N޺[6L4C6` p`4!"K,wex")b!\RV8nϭ&Vگy\vV*[1@5i5c67v5_ar%rS3 }V`1 .ݬpbW*v@j%@xpQ!- `+ӷR*FCOǿ8Ej!SJvfi#}[ b5N(0qF;]aWM-jGY7E|5A:B *v﫠~R^_|.',0Tf;WQ]Ћ2 53w/v:]g-Av:<rؔ 6RG6֡@*QB"LM[B#u'1* nO(L2 sl YV)I%CWo$v6% HDH9CMb)*VFF";8 &-D\]&l?vRM v !kM},bq֌iQκ䦖[t?:owGm$2? z\0}fgʎ-nv/p=ϱ 8n̮;XR,a.د03E7'BBqݚAY)Vat.~;)kBH,m'~6/]d72 mtW;a |&9:c, 啦+`Q`p:h 0n> ̐}<$yL]8ȕϦ^".9Ǐ]oxT2ʀ(f@Lj2$ A&Xv Xbdk7a6"qD`Љ$83+_F>ry5Ⳉ9TQaaQ',קoxThaQ[P{%qW0h jYw N/$:88ۡs}(> n;gF7^#7i@_ƙ$ل:]jǐK ϛB(^ygx&tavB򦁱Ţuw;}r O"H8tL\k`t9s:dP>6Y P(2钹tҺtb `H>[m>rq~5*۷DTVӚz6l%oNf|Vե0P؞߿狫^^|_1zh-%ɣwoNnw"wۭ//o/[^KS5/ӉO˩/Rꏷmۤ❏)l@*q|:;Q 0X3EwH'|I?Zɚ(dZ͑ꃝO?ǫ/7[.͇-7?nt 7X7Pc}:o^Zao\쓧JWJ/_'6=^vxW̠ѩ WNf;Fn1tqQ.ѬRzvt ~ b; R}dbZa9I*0'?&Q"f?ڝv`H J¢(К q,1ᴶdlma<;f<YaNAOɔ#j m\i]c1tD5CuCK }ٖXScY"PL82H4hg5*vҋkzrHXH !~dHrEG=zrq_ SF v꨻LD1 PVw=@2Q{jPe"G72q]&(8J2X>c㊱i#v;CX5 w4|;2HΞy{]>VGޖ+D}+^]ה)LYwLպk`>SL,zˮ d k*Ŋvw@)x0yKЙٸ7$vduX[!p2Wlv|tV;x>~ Cr2]Wbr"_9qi :N|S(R'/8|Ѫ9=K'fgO9[JQ]m·)R{nٓcAn>%oz4h31lWUa4 "`1hm@1$ ֌1qoA/}p6?V]y Zـs/\̅sm{Nq[5c&uv[vD@23Ƨw55LϲܝjL4膟X׭ܶ싛Ҥ"sKS.}V~]XƖhrߟ JZ}Om| LqWԯHAWMl/r h?5W|VY}ӸB[~/?Yp-W|;>g/4ghs^!% tj`rTZsl Ns>i$iZݐ"P x(4zƋgH4n}t٭3HVs3^em[\e5$ޤ"U|$oH%[.,\c]Эxlݑo[vGo'VV\㛮݁L<2!r)V@*Ųxo;!o ~X4;gǝ;xܢɛf+2.ψ?_g!_90jpQ^7s,+w?rnzMyC^)C4Y}9܊wL:&ֹ w2`% ws&lrݜ)p)NG'} O])u~"=c.#޾uB( e@eŎT{0}.nnށ; 9ZGe].jUe8epZk25_ZuGmS%cV+ LNtWuz{JӰ8du\mVA}0xse4O\4wE~& PrP.zwp+읗`GSࠕzW?k{Y@gZ"^`HZ?x5TA^[ ,na1QSfm`)K2> rRZEwh)678RejA/$NӔQ?ϫ_n+ӒE&rX=\cO*G@; RῬ-(jj gTK?pqJ6EՇ,ngFkB#e!DP+rc*͉l|R=,\yp:8P؊T eiIОZTucl((m0`#ܦ;XCSa>C|BNT$3S.4BKEt%N4*ZsFET{N-CpL^oV86s'AX#3##)'XixKYXi=+KSSP< v{` ɡ'fw+XRr֐e8f[} ƥ|͆{Sgy3h #GiP. MZ>gpT#QВk #&R(Q[ դMћA槶%ҁ"]~i?xЉpEQA^7~"} VktQ_pjȺwK^W|m @v-=p#A2Dߓ;zSK9~bZ}ŕ9rkqbH텮78taD4؃}RA]h`ELkbaBY>Eam ;Eey>N:CL*ъBC@ZئlO͟1jw~Q<\E (On˧AjOnB7QTW WLź6U䗛!b@iim395sÚ0!Z2&<*w&K%NnpD-/} ua=+"=h. 
եu^IZwب>BѾ`0#% )\}WFS_Yk&'S_Mr^jmސlބA$W2(ȧR''Yy \( \_ˉ#l3Ƌ(1yM'M.'|/??sŬT`=v s<m lލ8.~X'49sÇ95~7$f7~:D56tF69} MNd>SuUW(%}b:3 h9|O;;`3B(Oh:!&3~U}Xw@e+Ҏ<ں N)qFm+52o0%yx\xuɼ<hB91%2ԤFZXYmFWA۬4LlsF6UU :6J JS-3;n YH_kDJhEb0ʘ.w˭xѦW &MAjۭz^H#-/Fd \G[+,qMqvxLqékRFgPpjFjj]3Ɔ~a!8w0OGbt\Dg&Ҙ% yg ݠ˾ 8eWͳu#-⁽_:aɘͦĕ*('he֤pڠ(/16$90#GY|ڡ]4lp 2%]oJ* Y/->%X>l)6͙CVҒY#>CΪyKn'_@)' _x KsU7j+*.zeH8.%\eețZg <8SHU)s}ȧe'tn4\ w֊;Wo>Cs:!ΜAӈAS|gd#tZSSu+/K9RֲvUцZ;F՜YbrO-D) 9&]zC3yxjk( #@I) 0/'ƄHk5+u &i&*Q׷ C&pDA&?*YB9;Ih5@F`DG U+g`?RNgn[EwH/_#/p8ŝE7VZAx/%Nb GWjtؼ|_8gK!O+][ KIj{pm_UVdruipZTp6y&v׆ڀM `%Ι73GNhC}cTj@$F3XFhYq`e)}84U&q:$~nsNBot},FT?~lmks^r6Fdh[k\Ù_6/rHcqDDTiXH㽎&"b0,ϋ X2N ֘Qҷ^DACgp{7r np4-&zӢ+tWaƈ-%=:蚞 _N}Nr)҃Ip;NavlU…h$Q4bLq6+NbkΎV`<:*Y99[Pn +9ǩkJ0?{)RQe'xXB-g Q "˨p:{_o[8pb˨ɑ)rכDڨZe֨a-@Su]IysUD*HkTY XpSU!yF ?qj'-IKvRVx\X(h$i *F`Mȼ,|T$VFu 0^];hB/a1h-s"FƳwɻp#Ak >L #lI [P-YѾ`= ԧt:ӚB$whz>iK)ĵT<>+Ue|=6{;/ p8HBR-BBp`L[ Dax#v)GPoloOuIֳnda0aoCf}2}FRHaguGSp mA6#(,҉v߿Mq6(kw6zE//Q p||=םr=w^mN_rvu_corO4a}BA[:ۻ_ss7um7{%mG|8:sV#ós]c3t>7Q8-BIC7~{8MWbrMŤud(g\XɏnS8K?{Xt ƿuO[84w'=gWcptGsؖwMP.[lk}78pI(Qdl'UIqVEk, [LlTٸcSTW 5J?{!d}S~/f I˭ޫnKmO]7Sw ?'/~˺qX9vw7N[[eT> C2\'?|^9q ͻP}p^O6<ĞM?[xѝ"?uؖyA9U=m BK Khz6og>2YkB@l?ĽKWÛrs$HAkOV8l0Ƙ/"!FSyx`k/U:D^!|LHLnWHpWHp |- A緁T.bpQvO{.5}> Ƌ7ke\HZI{6=5}E]߬YmmW*I̒_2+_PmaZƀ| ,mn6kZE$*rAIƌX-VNЈҎ1,q$` t.W./9=Kt,ѳDGϦU[]KL+/iVUî"`IXi!-顢6xp  FQ l8-nH8JKWVx@CH+<iuf,`i ggJʖQM01Aqah%iIz]|NY"fH%"i6M$"ߺZ͖!YƳ&H42@=6QF8,50Fu:L0 ?N oʥ[*G.,ф;x@B+HdU-Z ^Ϭp9EL3TiUpOԄ@W ;(Sr8n i@e!aX1'm,a,)YOewFR#)R Aj!Zj\§A@&D75qT@؄t.B&B0d!QVGiqmCW/ ϐ8cE_2I*9q^R.@/ Cl3̇-@EEaZKJb]uU[ȗ]N .Q`VW Z9 Ӝj:Twu00xd܂b"RS:fXȌKi#"cA4$V5X+C3lT1 G4y 8a0HF2dUFgT{Jn[;`K8CE, H|E]mfV\@$zjJZ@(+@ S)\& qԊab=&habfa=/열 Txfߜg6?h%!gJbA2̘RGGoA4GZ#_WhI|7DC3I%M檗bm"%ʊC|J)0zúlԼ I+Cj˟Qd[UgգGW a|0\ܷ}[TwJB[dj 6&!D }@°B1!!gK 9N8҇ābЦ'1At:ds߉,׾O1+2'$؃jƅƟo$ xh̎ =Ԝ||bjm?)/iLI/>NQo^wCl=ye7*J~t8|␝f[ɧKnΛyz!).# j?pz~n VS'>/{g\ۤ!7SkGGg_ǘS;#JVw>Nd&0WNrF|?*%X(n`E ^)zXIp, >:O6J/CYbseF;'Ep Eո aE-1Mo6-x[{u\&UZjR*wmI0;~VW #,IF~ʲdIdIv}I٢IJ ۤH]uOU*Z Dbv\H}oa>LMz$@eB< ~9/ ".$:eF$75)c,E 5aP+Uo*^c@ s8S`U`q n|Qծɂ[ VqpZҪ-'(LcKprxq/8g훣ۘN8d k9,(T0|Ɓ0-/BeնIo;g`ݔư՗Y*ln%`c޽3n`Ni>\ܨ)<>J+,̨?4³/N˥o.N7n]_gKN J&*)-k{a]z{v@_?>A+|x)Ri#6J~6=6j~~oNQ"S8ӱhox`&Ex\Y5J֜Џ]Uմq SSsP.jFV R=6I}$n2F\%lF._$Y8^#*(HrC4#%t(*(BAqf.PPtTN((6*7eP Qho;(Â\`Pl |72UBf^7ɑlz#ԕyPM_\]qj_1^*.Ϲ;s׽P>Ƨ9[|4M✮wW{'_;Ba P]d%X4*ڡ⒵K:a ОCq .7Y¬_; AFANcUJ9ǯ"KAV3I!Wc(siAD!7QTDųue+YlMhޞQFԈYxk#ڈw nˣڂw@Eٌ(#R훣xC[6!w̌K\+,[sBu-`]E):ײwu[3 ֳFI)YsA+JmIh$F-橇OOb6dopR.Kgu"s L> dL@J:q%Y=Sw~zsRyj婕VZyy'd'BƝZEfǴ9y+B0ỲL(hfK?uC!0\J?U7%G(eq`Aݾ]m&j8܂Je 1d)S$E: w;NAUL<%NF Q6o\] .~jpjpjpayeaz'!=8y0 9%H1gDΤ9P&}0Ń,G܄/-j(ls JG@RP ,)Aly<^eʐNaNc*{H!S2]½dGge sto 0deQ]2\O'5P 5P 5ʰ(Ìff,UGg # 9+i>jkښBJw7_"ͷ3-Qeԛos RV@ )BIx;/޾cCUkůg|gIݔbo7v}~לٗƾKN O=?+{JW6G֏5;WIfꉿfwUIP( JB%b%<尽|Z zZ "0qG,}aK`wmlQ4X!`c`ZedfVXR#!4#9'H [ ڳ qmHY ֻ#h{AFk RM?.Lv`I>i5I "K,^?T٭KuDʆ[EVR !JAu#b!:e-XhY/L)Ky5Sf CB`9k2;8ƇDqZbvN浏JMYCGA~ ˁ<S`_ami<BSRTPmq  <1x+TPV[EPY[Pd40LQ5.Su/߾>qqq%]ZB.ۣ h lyhB Lt-:mAApTξu]sP:IWXTs&HFdʘTȫXPpb2NeA WWוUTYPeA:(a>F"d/$AޤC ,{Ǚ/jY*G4MuNu;Ћvo9P ?x;Zka.sEFfog/ʏqoRA*T;PC.K.+ȼAz, Ёc,d4KvQ5.KqslKԸDKԸ.B :XFT 512D MQw0c( 4$m'P] )AܪGH^$j[Ze7Q[ޢ|mjKRL+w, ^'$gtٻ|L2% eǨO49A$^zvzCJP0L 0N|C 0O 0O /3,lB!?NKOv:dˊZA À.^#IWԧ -^ $e'}FyT8>u•(N|e(Ur!(I=ar<ń1$5qa[hjDj&3bQ)kDY]]?RJY+eu}uAY6ؠYw ikȥ%%fEf)ۄ(GA[i}ں S[=A˞G#%?W7jYNڠbz>[Ga2&'Etr2Ib(?rRb8<ѐV2%9 gVκSY:˚_9k嬕VκF:, 6YZXYs@w94bH2[ ɩHj8]; ?UչcZ !gn3BKY ׍Oh\]mqQ.0W!j%XsJ3@LL&h5O7RP\`F0~lQ 5Kwg5P 5?59(b<𵓲07[4]^dfbEdl-ӈQX<8b~;Sس3H8 $opE"*8!H\ :KH9馩(B.X|#T*Z.QQxO^W*ZhuV;s\_!̶g٥ )lC67*a.Tmu ANT1+fl^gO&nO0?l/y^%s^Q֯13&rRZU\{k% 4͵ON/ 
G֙HDiˢm*p^0mHH`U%Hiއ)*{7&J+{2idU Y`o>/F PK՚\ޙIJ]Ep[f0P6ofp$)_/q5Q*{m}d(' i tsʓ{aHp1g@MťV\IΛg?ƹƤpL΋WK,eJ +$}p\ OI{7gM8i+zƉxu1̽iC+03öv$HHYku6]T2eĎ?}}67JY)~AS1ZKY~{4+f)je_Y hs$7WV7ɹz-&񈢬Qǃ*8J<翽|q/GEgWby|[y{+XG޾z}CZEH>:lb3cKd f\Ҝ[Uۣ~Q}b=xK+rƤ~y{~eЃh֓mLxmfdoC5;)D㋿/xu|B7Ox߬A$c7+M?~x`?/h1/_c $zGdYкŃ~V]LLws<#Moz:?ϙ8OZN)ع"Ǚ[BZA![ `ުnORJTdɔTcx|hkȀESedܔYfW}mq d/Krٺ7@$Iz.t xJ dL\a{ -OpoI6USC0%E9^ (#}[7*XAuL{4Xg|>C0lɅ4XV80nP/[+&_-{Ap +m^-u8X6DY-;;f`B@2h]˜eoHrMӿ^ݔ+u2\DRDk4ڍ/ |`[XBk\ِ# 㓣:v=mm&KOՒ쳅\bf>*F`k~Cg>qq"{P fȴkJ]Hx;WxRJWjG`mx-ڍ.3tA&{i=#b=F}9 N%ǛP㌶POKe~)Ʌ܋v "ʔ!a;XK͍1>a{&"L[Ț勇M9S`A}.鉞;VWx)T yb /S@LJsYbbxE$qGZCm9(6";;mjxMIlR\je4ecZ;1!F~jõmq9>[(P687>(`ƣ攧BtxN?/'>}ĸA%:Kb>^$b4<{ > ' o⌄0 %VLj^N?xFf tW젣/B Ȥqt26"ǥ}kI4Ws\ْ2[LeBpDc㌟;,ȲOթnB\YIY|$v(ǀ96ėt3S`.1[k60G)m`kL&- Üu3ެz瀉h$zk`N&n^ F՞N, nxV8JD#g aF $j[E!%//#KaƍGu$ p)ü㒅ltZr43gj! nܺS Y yXp"4.dN"]ց="KbL /ƥt4JIUk5@H0ދQܠ&b6 6j\[3pdDP->ɇl/(Gw3d@YOGa 2lFh6a$&_;7(E%V2rS͏9;{S~ΡuSZzN3)5SɾW0:k͢ X59k+ #It؛Gż $\gu%rgh|qr#>=b~闛I`d DC S 61LWD5N&8qO9i-SMXɗ«`=x].+/?Oѐ÷sO\޵~'QGE;)wW =A d}:oj^}ɮfjqH<{ &N9TWcKXfLEo=qהuekw([ 21E:'v*ROo~yXqԒq9S!X1:qcILVS+ c^cł2kŠYv՞a9c—Mwߡ9Y (Ejoگx'FRw ӫݫGzEAȔ, &H0ԀUdH%:;Gg9rhRa;4k+ H+oB{_5gi{T_K6b,;? $ bAj/Gƞ5Gcd쑂o$6EglFXE1<3.#;s:!G:Ũe 6isG%&t$5d3E;ƹ j W+Q)!$׸jt富 ~38lwR%S9pDxo[}cDs[hEF/U>곻G]\3+cc4ºlʝm%n@s3ghOd)ocA(þZQC(l..->lyo4zsb5sH̱1j\iwR[p v;/qM0XB4dv"#C^k L <ٞgq phуy )3b*ոޙAŘ<.ieY&T,'i qf8o.5x-7[<~Dǰg^1C? @] ڝ#sI sh{9e+f#m}{uID)#9U[mMET׹Enq7/f.Ό& ƆlL%^B\R_a}Ɗ.K{YG=^{{pcgSO j2%1һ6o>Sp;>&_ APXQJg_*9.*) MWEk3a6e4/hB*I~sӅF-L'?!iSY&yδFQtVLyIעJ+ kNrm/J05v/0nkB*g` Wm5a[̳{kVF# ^!YpF &HkD ŸhQB4a/W M$ڳ4u4Gg=UӻwCS\=r~ss?ՒwRMLf;U.Gzc^6fk ˗?C^0D fErq/W5.^.lmWF䵖L =GiVXX9y+tgHUY֢k.B6X0r螜M<ʍTF×{{zGmc)7jܡ2t7]ojtg.QHNY&ҤSjq(sNuh^]@C|)~.Z9#2(N(Lmi &̌6my>_,M;m<'z5&CS۫sfNHD7n!Ƶ6.I?8mћ`<*fX3n4aJ;;?hu=:. i9q& .gZ{۸_Ev%0'-6m ;E8$'V+K^]F{ݲ˱<4P9<ϹA.7$3\@Z4ȿmIqdkFA9K挼e0%DlU4Uᷮ'y3d]?_Y<R3X{4@HF>\Ofs¥H)'76| ; ;pqGp~d+Ru  &Ch "3i\N`}:8GWm؛Fk:ooO8c?<u~oX@.^t`8 n?7*3Qg)Ww}ңxq5"E=ӽ?:/_ɵq ,}]yH:RSƽ>urf|܌:3J4/'Y07ܘtٟ7@ o'QYܢ #??~%?en-c3v-2NV a10-~GAVk4x~n<fz137^wso?^^w~ǂ wxqkxs!i]#z?6hγ ͍ɺ=XAO`>n盛{NG?:N-4_ w3K/^oqg#kz Ʀ^ 4 l|oзhsa *~tx=._ܼr?[?M?D :e0'70vT&_ : [| ?2Gݿf-,:׋&9}.0*X4Vx&e#o;a՟ R+V0c@,]P\cmcVoo>m.1YRl D#3Ѐf ϥ;H2Jg"G~Hc%W{JQWR~HYL 2l0o02՗R)o:F#Gp;`b%7GU[k2;Xjm IJ )ȻQ =^.~jWyO5Iה]\mELI ;$H .lIwP9<XA9= T -bvJWqR`DŽ@Dڛ\A!@QC^ 0#>wK`]edk,q)&h,?P:MN Dy:F8;)"4`^&czs*4x6+h[9s.6x֥%%`^@Jn-=u.8_x;~ldaSx+k)S?,coGoqw̞wmoR|A'sJn F+ۆtv΃=d"Ge\wdo|E( XpL>7?xvy~[*v`=h熠O:Hn[`5F5Wx`;QM;&SgsS?ˮ w~v_>г~5<3>f:8369 `?ͿٵZ9w !{C(\rQF; p=3,=oߞγS~t ֹ X̠q^t ^}= >/E)tU{uq| 'Aߍ:5Cxk0ϯ4;pKO\N"3^p59x%/.e+2~ k֚|w{z?=soN?}@%ȳ^S'l)b juBin8YC͑=y N1Wq?1mCnՁfV=3qfÝيCʬzxufUYu@UYgVՙUufս}{ROXKdܘ} U>B1"\úqW⦉V$ Vd<3D[`nҚ,Ͼ+zE낫;.zp[v [vn-P//Rt_ !ldW;I~R vƃ4ib;Wao " K ]Z>|4,qcѤ >ѰRa6X>AGS ̢9OxJLuY'^HTq DCGCHIY=6s*5@ @bWF|q8w廸F [ N][Щkcȭ"Vr_-Ʉ^ m~1=^&ʓ34fQbwَyQ$L7+?]7cLdVWW94\c{X}"c}o]QFG1E+O'[ IH r,Z~Of AIke3ڢ\2BM;-8"Jojy ?|5kkkn2/jH%:݊&N5&4^j}"b) -FFلR~I!D1+dC y@HATHm5c-s/Un^ЗR6Rb)^cdv>r`E1W0$rJ,cg!\_0OWTb/?U[UY$Q_XaR e7WH/ `l`$*%"YSycL ThF&P]V>IiX DSF?YԾT9"S+DWq\ݓE4р$Fĕ!bfkYe8)+9>]";p-.L辌T:'xoJkP@Z3DOx$MTAIJ)<}{И&59{81iH1Z77Y3V7+o^%͹1;r/FI͐  6f&1R>2(HQQEIlxs" +*Xd{i}& !c *Δa}~ww:7/SS[Y`zLa_Ζb t# \SVu ؛-:x;jwXI0Q 8D ΋U^])䊌d1Oz!a 6%Iƅ o_4#n;4xDZm-. 
ި+jJ8S-IX:DL\~aI6J|9QAR2aUs P+V 6&G0 lr'wb+ 8 l$¡g)$8;%& L'¹`py@Y.HpU9GЯ"Dá;p" uHʥ*R+ƎZ&`WX:LSNj122L"Sg<\Y!PFx:utXc|., Zu?!N\ TMMx!F5$1Ah@IjXsذDƌXAD *I.0wNąuN#X>n d]g'WhIE(^8@`$,nՇy?N)ǤD̊_Qee:'f*J/I(;>V`UB震^{pK$$g!XhC4dDuZuF%E-ɼ'0D,|.4Y(E ##J+$B]J֤4)S B֖I,hdL8ͬ>'ʌpX $!J$jsz\R=b [0Ήb@+"<#Ն)z`"^v=&ݺD6k嘚LaN4r#G'pNaeYk Fd e_r-J*1UdʐB)ksQ`OLKn5.L&HhTF0(^N[B^pIsoYQDiYJ;DQqL9˅9d&G`9淦d_ 8dYBƃL3*`)ϭ .\ ՞%NZ$R_ `` {e4 T x@V |TUYAD6׊/q=$?eZٌ=\@cbL@ւY9EiG[L%wV" EW )W n!glF,V2&ÙrmP:B XP4q(%TRX6*N63w#1k Mr.S>}&xzj6{'- QDً#~i/ P9X{ĮoN-[ӓD?9>SXn;M-ܸ>;Zb)8&3˩rOkԊ6Ȑ\mmB1bC[aXw+F`i6Pc&q75t_I>?iZzpc0ɝf n9,K&ZUQ+?tݼƅχj8*ogOuY]ӛ|n&AXNJ齵nn6 Q9]R}.>Któ*+W*;o#O+3*)1ZP1 3Ý4q06L3&wns fܙTɗm\N,4#`MJL.<#^ uV o6bg0%{,4!J0,iz/fq@-^9fD\c$P=ǭP'[A". C:(,`%ÅlArty H˧Ǥܑ4FpvɽVJty 8Dy٫"I4N3K"QβTRS0*#~\ "@ "1ST`QZf(I/.<7ae#80t;kF{B|8=t:46f0ЯI>tmw_ /?G,TNݛ˓L™~|1g,4̢z ݲy)eeTvً.S@ih gtLlUF {&V\uQLFŚbFؑ)rֹX&}D)嬾Zj}Nޛ@+   ;/T@jb5 iTgN FrP a-5'SU8r R*໥?ZTbF5dmסݩqv*2-xS\<zfj75#JTDx%yŭzoH1+FxRgTJ4=z]rUZA-T"?]w;򝰷 (dpLL/A、7Ms:,U`؂i6^ao ًm䔇Bv+0o){mp>fi3rO oO߸µDC$:2E ǡ:Snt@VѩFv@K¸Piڭ EL̎,k7B~A FtQEpƬ5Vڭ E$Sјu%AZnO-Q_"&4q"VZc L"[Pa&8flT@n "̇g;m6;𩢃^ߘ/>3Q狶"#;ϻ\FB鎷\~fQ /(*] Piu(".LIpN*".%9d`m(K(H e0jy%F\Aw\pWs-Frk:@ <>HuW`&\Hp !ul P}'D.1 s \SE%PJ U sYGJU;>`bhVܽ 9n#b X|`5C\E#aXGvDw ͎ls'Z6 W7ֲbbJۅ ɜNPSLtVGiB*8(cn*Bؽ=mY3Y2>H**In\J v`!y5cb;D=MB%0e:qs` a#YkʟLX:VY`6BnAKR#L5* -V J`X Q6Ts12^R]K.RHYq T)AH/+Y3B0Zn4#*t@"IڏVj_Lb66ukυ6H Bj>;n (nJmĦ7*A1gBrtnvWYVVĝ5kRh꓄P'I([a;Y‚XڜߝjI m/ $p82>6B|l\; B_r[ÐfV}q ސe\Y 8mPUJVib}; {}K.{h($ GشF͐ H!c !Vsu {7X#d1q.44d*dHpJSRj89~Lu8r8?Ժ8mZӢմo? ~MH7.dѣj7".1S*pz)i.ڭ EL\⼣V(kb\[J\Ebhtw2bR_-WE@:D*;#xVުC#p E0od J,L ~./-x42U> (b>:Zxڽc}Вq@c^L-b{*O)FS5V7uI~n s@id.aB֒%{|O({ *at[ߑ:#QT\PzO1`o~:B%G&VP[c\裡jS5*ܒ ݢJ;ڏ(/:7ѐQ2hW;23T#j{#?.q'mnU*dUAMoja3-7@4Bj;EU@bzBא3`JW!I^HSp N%DP RJcsb( kb(v?]SM;R]7T3ȎEk&k+AhDlZ{%U*42pv2D:TsM䡷+O[9A*S8G&QkQ)*p3׶b37 !߸֒)͏~8*!n51mTn$Ljj璋n H7.dj7F".1S*퀫4ִ[j&$ђLE&D4҈4 YWЈ6* #YcLq|6.N~l6c"=%:;lھtf鸳^"ڡKg}r|T&g#Nj o ^Iֽt} [!kViHS1YB9s{6$(ƚ>u`k9IY G |%q1?woVYa SG |UE1"TT& n!RA5D#}мY ;^e翫q\O[Ht"R坷XXLIRa&cG0SD$ 笀RK~gZk )A03x#c8j Z @hg^04iƧ"j'qSTfoA|JOI}TQ\wsP\b8H}Ü&̦(ebIZC{B|ALePwuPIbC hFb'SHByV F:*Δa[X厡\ؖTd"k./w|^a(Ňu7SzUTBty U3 3ub[ :j1`Ɣ%ёSRȖ1 uX"˒vp%ハFY!FI,)Xև%M*.I!FIK2|)QLDQ\X]vՌ%:)!hzI(f4_ : .h)@huV'U]76Yt!vo=+DGQ;[,.B-9u僝}i;(X,njoLǰ7 'fy`^RxDrF 0 C lTycSlƭI%,JQE攃Q,g1jtQِ~2$N_v`!!7b]>T8>]" lI$4|1ޜ8< _BbVHg ߵЮВ޿!UA2XL#Me<#,u8C)5\"D%'aT0N{U@#\P[ܗjCO1;e,X5c=M҅ %>Լ;RufH*i W 0߮3#_ CE% Y g۫QMz˭ny2C>nX^穃,Hf޹a24(h^4av~e4IֺO'yV(9=-ymb'2(÷0[Ѝ?اl/wfӇB z`hqY7_gK@84|Aͥx7+u_Ϯ~ ߽:h$?>Mf@ϫ[L‹O^ o-LS 䴷| ՟p4ݟ{e?ʄӿ.g/Ns-BOoCC>>LiV&$쭸pldNFc&3${`Շ1`iSPZ|; =MU|A c|҉M݅/eRڔ}cQ߻c:?=w0kiolY4A#? ރ{XϲR`DҋO$ AP遅GO&Po~?x<tL^͇/!%A?FçѬsƍ%B/Pt殳: M]MS39_'8+kO4z?-NGc( l&ףI?fgϏ[|~O~+,07p,g/f0HL0 )3|S&(1* >XᗋrY EO-Q607w֘ El-{q$O@QmL\ďv1|:)xy -aӗeOx?d'{ye^x"y[F/(~dz<˭/_fBYWHװ# J2IHkX_!C$!R2/Ly "Lv#K{) (޻|OR! 
J)`_8Tkǭf;a7F'`D2KR+xpx'`9T ӏDn$s3'-JwX ͣE.vf7 4-G˚pnChy!h}mgZڽ[RhsyN1>n(5cUn#C^y%ٹ( N|X2@ ƤesnINnOXTı(,rtiAvҘ>.[}=$0s/'3j0YcޚuYsN[2'tg~OF_ko>Rm}Q4е6е6е6еm(PFP/&m|BQR9خtMM11"tMM`B6=hS lJaO h!2,奰 Z"G"FLD,0dzle [;>~[ x|x|kw|^#3k߹i(m:VNBT[IYΉj+$~T[þl2̛ёJ6f̗/[Yn kD<`mOF5SyM }tj岀(6 GŒpED'uac6`WtC)v(}HAVjl(=GbR5J=@(}HAVjd;=ooLUѴmP6v'L`zi[JGb -fXB=HMmgb)wC)J=%J)wC {C1OZ$HCRRSAҳF)n(EPp"R#57J1rC)ctDlJ1rCcAZ5J sC)h(}HAVjLlQz( 1$!qitK=HZk׎cq#P JԘv%QJQc"R#57JtC)6ŰslIq陣:K ҌXJǥ4Q׃H.nxաZ+ђcO 4xN.yI")+?_7j `@E}(@jm #EWQW6N\,Ś!(!彾n$_!Zt|x35[N–K$$.V=%q,.9^ :SpV.V)r |{=)ᙵci'aTm=ɶ9 hV@1.)(IaR+Qw+#)[`j|7M&uRbKdSD 9F- ~'&:|$#@Y1BwD= cC48^,c֢MFCǒoO.Rt #~WH sp᠛yXL[uR]޲-JyN@T&MG%7-'6s[ݻWtw!>` & H9H@Cdz|]R a0V)_pkNd'naG$^V®d~#ox& cʭp)mK4aԃXXi-JFH##2fi(qƈAX}5%nJcOJؖ?g"14J)hݒ9Jz@# aG )<P Dz,Wi#(!g2^3x]γ`:bV? ػ-^ QL0{A@=<$qLG}N# a0 .t 1 `yh ᖷyp-Fa\& =Q: Eutz"( <K2/$Bu޾z49A HƒWY1Ѱ#NF=_q-h懊#nFRjĤ06 A{a‰=i\kBWT 'UoMj* !{RP@qB·*(̴7aB+ذW/ƥqvʐKV-ָi{u8^*:jQ% 4dKԟ]@D^{w;h-c2FRb.̞8F>M8Fe]T:m<`QC6Ÿj 9GCbBl?I8NT#FX@V5PCe9NȅOSm0Q!ܻ`*= a(8#>Cɂ80DƾD/L57FT8aW!Pͪ^m.:;}ac(dBф[! 々1%!!gHFrʰ,fI2?21VշlH? ǖ/*g8t-bv!Խ7R)(?֙7L=J y\QJ2s?2W-KoeͧߢplMD0ZjƫCh|}S fqU9SS?}ҷ7o>+-͆*Dap|۩chu7~1,5NO'EBkmn3^FVoY.Mve|C_s[&t0K2WL/f}凅H aqҿThxO$_LSVxwF#fvlt⤼'ki޷#e_O"ez,j}Ù띣]5j8T[]j>7}D>[! ZGL,OΒ.e"ӈ;P\zz2{~iUeir $m\˦+$GL>F ژMuT"=HtK_r=0~aE9;mǸTӥ%uk@%w8? F9PG hU"B͠tx%ps*&,zQ_$'&zz(hoR9cWxVٰ;QjӖQꎧ>(RXf nIbsʊzܽ&X5DKv=MJq,Q!a/YҭJ%n,~Cl+IXXǕYAkn1JQ0b ka9s_^ pI9Tyt{wsǐ|[vXNr(lXn|N2frJ d( 愰#5 /Bħ^%#2(]'[t^p JH.DThўK~OYLpF"[R}Po&{ɊIO szz/܍„P}84/AA|g7~yY E߱뇇hn.Y|ј)0ͽX'dhqb[8rSP^3n.>mn|D|@9y-)vt5! t#Ot;݄ETڢ[Bѭ Z0Yv%vJ+:[Y6E܆r .9-eyK'h'e-Χm#@jp۟ VE\Fa( ktd0! I, .|#@e<0QZ0Bxz𷈐H㱸3Tg"1uoO^ mq؜Oq|kJ)'/$ĪSM!$ 'CX'd%;ocgL~^m}(j%wiGꃎپZ棡y͔kQ浆浆浆 l8PE^@gZƑ;dsIbq`3gb_gq,?RjjMZkiXc<#0FH\$FFFF 8~)/ EQ_mf }bdd,Z}AZikp cr.8Q&$9EԂ4MbU#-S('p-qFR0Hm90p?1'zH3"HE 8EN;#w3;B]욮BB}nJq N/1c>lDD\C0C~d,8+ۇO q`OJKKJ2y8y8y8y=nAL^⨽D#)(Q5'r=u|'LdO&sx2 }vnL\>&P s%U521tPDMW&}- s+TӕIW&^ZIL&q+G|N֍dW&ne8P33L{o˂91U8\NM]2 9LvQ{o$:-|B 1Ic )MLh\:Jx*m%X49K(d$#JfBs"TN{<齾,Hٽ;ρ ǼvE 2D'Ba,"ipT'Zj)I$\P8'uHHJXܗѹq۹~_>E]̲z,}(+k |'\+n_'8v )ˊoխSBO2_l=٧ߚ~ɇr;ksn؍2ߞ])im.{kj %`CQ9֞%3#'5C~me'<+K5)+iw蚽MX;*[a%@" 6Vuэ'YߞOvN.YK:jEL,\}6@H'SXF{ (EPuܓulxLDF0eQ7c ;~-j#87&}zzx- PuQbV'}1f5"SmZ ڶ{7omriAkKJtE?V cVa`ջƒƒz+Rꂡ=~Rm06WJwuTwqJ(K׬mu/3uɎ{JiaV '?o*Ysq*6owW_=}|rq]~c'?'"%|0#%Voiuk=''t+Q _m3)X'``^CzgJwvٵT6]b^TLY1rߖ ݟd,9ߞ a as*GTͷE,텍' %Qx,ԱLy'"9K@cQ7$2r.D. "Jh8Alէ0V'eqZsonA{:=pHՃ\i;& 8( R-<yBQl t)NӌYҞɌP"v Ǎ\/f23J-T u76 ,]vfU?$71<|]i w -ӛXCTfo&&>QvET:ZN}uf+6p~[%&CA+hnQhP@'>p) L:j.!)AFo ok7I[(\v@nnun] C5R'S2PX8"u=pfeQd:H8E3.r-DR$"gԠ.2B3o7Eq&bU?]‘b \)=TOdA_-6Fґr$Eh5IG(i0P4Yđif#Q5$Hd b{IxFr]s~ ߕ{  .EDԆ jBHRB4zŒF0VRtKp)ȕzS҈GB 6>T敺.0=`[wa9 ·Hx] ffrf`5:6t(Gz՜3).ԝv k\a.x׎3EwǗ[ݹ.]FwftLN nȭ#ke܊{wc,Fê{nNvc KT;|UL- .!)ߩңA tBg} [>sh4GB^9D0--F[(\v@/ȴr޷v v!BzHJ# q 'H@ dsJ)aE^@6^ewvO7WQHxFF_J&tͣfב֞TK6F>,fhAauvW>|]noPsb 7elfz]VlsD,Y=PyV=OYY3{)hi) G,'ghFR$ISBO4N3y5Ha+jkb?\J-g$s2hTvCMw'TMnOHxGq ?{ע v6SS r>T;3J c_?V1{'΁kMB8uABP >0yRJ+DR"R:Q^Brt*wO:R^ tN>Ȏ$} 7wߪRA,bc v8C(vs$e0`o%v} hQ: >U՗|Hqnj7H_Hs~.oO2:81+64g,14:e9ei;mo"EHR0p!/;h;^Pņ EĻp"S / FW}|)'5SZ$ih+$1#Q)̍39qJ.\G;|$4ac LZ4j#8 䎚L#p P*oYGGRpʑj\ϥZC`y|.,9mG'lm|hQX3GSki%N Ֆ(w`眶vkSzG}ܽ!Θl9+iov09'#xq=ͱ:6JFid b=e[ Op2%˔WfL}ӼKMf.\΄1eqgДDƌ*HEF0O4@ 4Li"w=gcy֬jrChT=ڰ4Cd&]Z~$Z|086 RsKAc[rjrZɵ3N XI'-:qPQ}:OqGBN1J1D2< U(֜g$1aLet~mGUn8SEB3Dd2u҂JH4rwxrEk=y̬1Xߙ7SS#zNi)83;ݸKn7r. 
[U Ӑh9&(mC"U[XK %FrAԀ.T{mji@fU k={͔b3񒨒P}— ` [1Gs LCv#nҍO~RphFȜ"gt:ldn uX99zvcJ׫QyxN, Ts#̖TCZK`ЎTMOL+im}({ wWLz{vcuV ëu.#SVig{Y C4SmkmmVBywhg 8nюڭ rƔRۑeΚ5qyBcsW.˸r+dPO<=(Ǿ'j\x7{2HX+N'ދPmd9ԣ/FsL2Uglc_~Y,7eS{s{vWwHmiY(>[Iݍ\xBb}nJ]tjymj|Ư#,=q{}"X?|>9Z=.8)VL)ӯ&jw}T/TFPz( .(㬼ePejNkNJ nE8K֣@>uI7J7|(GN51F)72(rMGKQSNq;HJ_R-CcF)# gU3BQ:ՊM>ԩC18"d(҂jܛPz(ER,KQJ ']z(R^ڟǀRn(-V\M(=bqҥv\q^ưzYfJTUϐQt:Ոd::n"Rb(EpCiA5N PʬQJ *7fJ;>u?f‚= bON5*1ǍR&8RPz( %ңFk$ G 5l,?{SsV8X}eJ^o}>T3f%O `)操tWپJj)>ʮ˺GO,r}U?$R+mYEYXr?&dn,3Yhj(Y[%N'@"U_wuUcyl'PՎI@Y) ;ARږNv'1"$2* ,AɈna\Pv68< Vj&Y^&^Pͼ{NA%MdӠ0BHge].>ةZôZ{q"4LTR0NDHib':pbZQl[h$~:4VCV̅w'ۙw!Q]QlsFΈ2恦!aaB)X G8L &!A\Ebb_qUS4Mi0M⧱IYb}1ٛ79aSIUjYnf|}McW/R~ͼəlӹg7 iit餃ջf]zC؊zӡ ɦ/fЃ]yclPysY-p1휼zwAieg:;Hޒ8bA| 'RzN z.If~O'Ɨߵ?;5OZi18Ϗg(~= e~po/qp<|Z|p G_1|}k9?}3*Jsyُ)x~^هΏg\4Q{A AAzп%AbNkl_w0;TGe i~-Izm .v8nalTg+=Q`9N tUa/~?bϊgG%Wib{oW {RQy:|lOslZfO޳ @oKqߞ ,&JooߞŰ?z{?{Pf^4qNt0ȎcS5<|~!,p81=9LUvgţ'QS>{<ˁQM7J?'1 Fp]}6=_ cOMb g_"S|O ^Ʌ.[ʰ7 2m ,Λ/| -QyLү[Wx./2 LJyUv]lT~-fc1~`;=[&>H>V[Ecl QsAi., 6"tryh|O<{䡓oޅ\#ʝ)!?sGEn0~FQ$n<+\͠lTbZbr*?fRY?wUJnnnnyܺnFyc aUՄw3ڨc bԌPo=n cGq^Wz7$!nnn me$[bubeǰw Nwx7wx7wx7wͳ-p7nFJ"JaQ6QkyΆlBԻy&1hI'j`qԢִ,.LӄlD `q)D(*a±8ckyp,, + \~Yֺ,[:+"a'X`VM&1nZ}}}xP[*=>&co{]WJ$CCz-W+m7V-4;dJL aL{b0 'qt/[qG\1uEI*YISyٲ*.<ުe{[ +A}^'*,>jt)uPѥz ) .(Ţ9E@*!aHKq%C@$".%ᎈ.Wc rwFb| r8\ѥ\`j)2BDbmdR'alA@"HBG17aYbD"QU$y[KZ'W#ygf~߅\6$᱒D4BK=Pnpm F,ቁ*J6\<_"SHb" 6pHPGHEdz$>2J$pQk $8HMB?pHPv45@#K?!uq τ 9 a m=ʼnkB@h,mDo#zۈF6mĭvn%6bmD>/,N&1HZƅ+eﬕ|3nb&(a5,ПB}Hrfu2\ْ)sL@" cA_TO4Paӟq75K0wah:NO۷ ሗ ]feCEDeׄMw!GSYcJQV ɶܙMp`tD(*`#bqa8!8c$U{ a^)) !,r\~lk7~aŽYx5qNV<#I%VH(aV#l]sޛj=0"ՔP]8m^3<~ԃF=ty.Ƣԩ84ԛ81~['BPoBj\xvֶ "۹&P+[#|KZwO# gc1:vf_ؙ}agEyf7r@ & 8 1<1Wa LQed',Bi~{{/᱉ou`/xv}P.#l:)Zv}=5lJ7L̜{-=ݎ6v;-R>< "i~ڙẅ́B۲6Eys_[-ҊF׿H*dfAE.Pra z6j ǣg; [q:^F[#CQ1Fx13DXp5׳v Oѧ/.w8-> P<ȳ5y>Y^kv1I)Ia2*)!63dW)v4҈J9SCTM!:UҘ`nZCWY<򳽼:~3SNQ"TS)I#P/׎<w&4İ6ҷ/Ub_}ǻI2]ҥb*hL(BcPU&Qp"DaK|S8XDp`2L}Q_z&į8q,lx = ٫شd93vTZ٤c{DŽ9/,/,/,//'6[)cf$HrAT'#3 HB$YmߐD0 BβYc@־<>ϫ0 f_RԒ4iWweYu6,"6|͇v)1øq_A8b݃iIq_wS|W2}/UUT~(5u~rw j&R?ap_QֽHpQҮ$"%k?X8``2#;%:ϗS+<@QaL HPL*4N`$"Γ',`C5" |Y5q #f>+`G(p֢E0qjMˁPa+8Vgju{?C3 @iֲP0o%Jf(kerwhYo@[&2ieB2񖉷L-*ge 29dlu J_8tDTeBo@g%2 ʽ/+iLw5\("8h,G e%M0JRh$҉6aeDVZv#њ<W1ongQz 'ae,P dԾӮ8M$OC2R":QR`_ gq$R$RIHBE U6 "=(C{`7.:\Ĕ"c%PD⁈0$ !S$@ 2 [\Ď\Ķ1BoJ&iyvD7v[RsC)8WYDݫ[a%1-@/8'/i(b&&;>M;\YCɴMmM0_E]"`jLdSA۵i#Eְ``mDShߎt9-%k{M%X*,5O?#ۼpFVȺ:"^C.Lɳv ^ f9 7#I5\ǡ)VE+sZ΢B+ۖǸ*eđ2z&Aبg8Vk'JTRR=u]F@5(y@%^׵JddUl~gj-^£ ; 0+ ]z ld)r圶^kr,_o56/0MGH)Lwz"{Jjuo@l (LQklSh dkl㛳]\6XfrAY+-slYv4 aF;4a%PX'16&ns1طf2Osȴ ޵߸A 9oe'ͅZou rд1<̅h'SדzqwsBdt5 G."]cg.Tj۱PnϏ;5^كt6H 90יx>,Ƥ dl AK'ZHA3A*"`\kqFn+`ؾ~Ȯ9&mO۲Xvא/oCYMk➀C$/ֻ%cXB6m_gv_>^\/c/r|_?_ܢ=@Ci\ӣs}}D_ 6N|nt.n04h=7ń T  YhX,)=gu+.X6Zt68ۂ¥jf 6}]{w?Ց-|ϗ no.}qbfqWUDEE]xya_>2@%/W.0#N;YBdԵigGG]@d9Uju_cY5֫vI톲*."7\4Xr<8F_rz$떺Vܗ@M$fNA67Y(J]H2;RDQ9z5_-hAtx3FLf;pp[Wi/ A<` R-AÃxi1iã1R8i <]iG1Ov2z bpT(I x\(?CPQ4p`fРp=E+`IE6 `dNe}V*("VٰL/*<]!^_7!-a%Yp=]0E9Sr–[:5 -LeeE`veeS JZXHdjyH%'})3%kKTum Y •'ԋ> PjZ!N]cnM;]f je#9=p"5g:K˘ 7\Q+LHP(K3$s+t+؝yӎ[#!$uBȘ=s:Ғs LǐopbΎo6l.n6aBխ299#}BtRt plY\c=GR9 zI=TϨPgni+w^F>Gd^9gF4'7Hf7>9Nբ(0%^ %Td,**1W{hA+ɯL(˜ 忻 eLeYh`9VpK*U@ iԚq(-z[nCT0HWEW1JIAeF5@",I!Q1*u](\هI(Ǿܐ!L2 c}BfaeQPR(y@P3{kGMKWi[[l\Glj[_c1rC$־ƺ& خƵa:WŔ(M 65fN)h*[3sUwL65.K5. 
Zx%5m/E\a6B hiݛf%<Ҁ1GDcsVzCeon_y't~utJL= p %Kn<ّD̘S8cp4ؾm"Sz#1*j=&5SEdE*B& U{ m(Gns XڀaG /q gzvS-=8@P߃XC; GSߠaQzugH 㯖*\޿ .)UWN&qd2:'FH:P~0""˄` NrBVZq.In/ih7Q $5^` FIF*%Acn$w;rR!8%i=w1H 14oc7R-JĐI*TEi-a^BXBJ1,g#PdAA|bn.7xGbZ-N:{53ʃ;:Vжx}RDi\%/(/- dFARR)8|$G輳[2`}Xg J "OS.L.`^(hNt7l+#镨ؘjM#.M~RJ#Z0P')=j)e&NJA3 Lek߾}6cq/f_o͟|■6>CMR}RKS^_I-a%>Yҩa=O9LznL}-la`NdJ/;<K6;Ud%ݗ’ƌݩΓ^Y"I,i1O}t4}~l,hrhce꒍Lь[E,{*0컯һU7 iz .KΗ Y,nَ76eoWY3W"< }qssOLrtb~ j.Ga\@k#\B,n.W^/Szݺ Z3τ/+ 8u}xh_SNN9nfdWr<&5I_F;Dq0 {neϏMN&#ͼ~9r>;ZɺNmNP.=n)S?7gVCBm 8_{|\穗{_&R(Qy+$༭ )*) q!9.>V_ϝd3㙟T3|,H[e-$s$; B>#'1gÀ n# 4\$gܜԃ2J 4p0MU\eFle<ӠKmrv]URt5>!}z1[,+^*ݞ{[@Τ׏13@Z[V=7sK{DOJϾVWϥt!>ĽTGps [}r |0{eq/N j[i?ߞM̜bz&3[[|аτFj|qjs دy.RC!LJ"5BVy) GA\(lxqT.)k >LG!Хx\el=ҹw""g1MR ?:6hluH6X؟Ic L,Md']FAgq9>q:2~~zs1GVd)mTt.-n 9}E#BJOCt(# L,rZ ւJv,lb壍u)?'")9xO'eP3DsƢ(N!`d'jڈ]B^DÂ3\x >Z}*|d5q[z_ਉ ,82}J &(#Ju)JSDRes|,Ձ ]$SAX|%eۙQ-}@OɾMn x'ƺKy|@MJ%{{aMo\Q?PI ֒!Ypf27%h{l9p R&xmb1LIIQ "\bW؇0~`|2]\sJy#VClܩhgLQ)1bt =uGQrc~rY^ ̶vqdRyR9Tz 6F%) ex@qc?2vd&H^|h\j,{.HS22Zi$v_sk n'=K(䂰S"(Le)7 R>%6OVgk맽UEvVD]=IJ7U[@pq;p٠$r$/BPWLZ>9 # .K A-E#\a3? "9-lryBå"ҨPj%GY9[,%y$rv6]}\Jtdȗ -p`ځ௑IGnZZh:~cOo}KݹKŋxT)^*r-UfH.AF";-ݲ=`,Hfx|Pur7ZūwM-v7hZ- G)& \BD׊R՜DrcF45"z=#ԎS9B掛NÂ5>qAta*U} 5 /8M˧#o,P ?0^jz jP9})^4<`nc0Y1,Fݬ`:XzQx5W$q4[nMѽh8/ǖbJ횖hMh&ڇ{=zhG4Tj˭`%B3 feϓV֖0//ȷoG+ 1rlWC[].7H *4+v:,M -+JX9JL[cx!vCoQ*xz- 9+-}ӑiDdqgA%ͼR*6ٖpm#i^\6ZvwenVhU` Z|l-V =XӛOئF&QaA hJL)^5:Rn(+|zP]KYJM`^^A*[mRh UZ8;kSz6;6^wJ?ŝҒRv<9%ȪHU&q4lYꉻ &kLVʅ5י*Nyԛa }z.2z>Lyn30_FüDYm@|=b . ~eV*T&3k$ϖFk_s0\xN+㏷6ikRu1vz7 _ 40P'=i|zFp`KѸ",C# sR%6ElfBf~r+I`U4;%U{ L ;JÇu69SLԥnyjǾ0sPw,WKOJ1Wt* 9%PU oy79XӫRȺ׬UXЭ>^SFu^}z #wWXO'T/4=آ"] HΉ]/9XsgZ-/o:ozRAw;oZο,ӧo'zg"5UQ` h(8ēr9dQFd@cH^۔RveK1YW΂ άWInx+׋3IUVҸ蛹JMJ> 1Ag\ܻ \ [`(=Bf{J?X4iPIRj%T 8_L5Qf- !%db0lbמ^߯ \O*vw_RB۝}'5OwROΧ{^67?Srާ]4a^ )w_ =1yeg[vTnVQ- gmy{deD!d~IX|^_/[?lhEZ 5g|>:{r fj6yW=)~N ]ρܓZJ]97za3r #nlһBw Oh?Bh-ߘhwsbGp;>GTg?]+kn$G 9+ѱٱcf{f Y]1G8 )XB]u*_& dCsJ'poo7mtcF=hlL5]-Voٿ-6oo߲d(SmOmǑH|}xaȈx:^$+ Rz#jĚ}BkJE.[tPύn{u&lqɩ#ݥB12.lc~7[חS1 1?׷>-Q~(W>ldi|O[RH;οw\}w?rg6$\rqgtF}ocbuؘ53)_}E,M4ɦ}$ݒSxT BL'6attyb@և'0s߻ynNj1mU\[r&٦ koP\a7|箯\uӷ5nc$@4Kd pHaD`pJ bE-Pl ,_CW ~XզSzf~B{Y\|]n\Ҩ* ݷm:7|mn\WqY.|=zYxL7J*fa>K9{cRU缾3RפFoVzVV+Bc?5ItɖeV`0Q!\̢u&̀ Ll>y:(YCy%(B EB_wO7!RyS2L@й.Y:Pj"Doխ?h_4pD@YA>KAqDF[1*sK#=AZ|*d yTN伻9w<-K%% P+ˢdps%d9wabt%VdyVpfLw9,\lM;ѠJTj(EW#ҽ *IMm6t40ALU=%ig; n]'U0u'Tnօ' A* nhlj10 }5y.}|:#4ʍRXkDmJVRBq26*!-|qi&T$^<ޣ:RNӽǭ b<ԓJZ~ O`נ.\jNL u ց_FW$BOuЃ&/WF?as7ƔP"b -EbEn ǐ쨤ҌJ2AKZDa(BI*h(u,gHXhLd0Ha 5EmVPbF\1s?PTi~㴤]G P +-I^p4dbcdbrB1ZB!lF WD"\,?hLxh@W0{7#^% Cв )YA E J\H EVł4 V2˱934uE`d)|t Rd~jGA3 =] 5wog-?>\]w5Ԛ) SD#PN߲{CKRV8QJXIBTg+CPx8tu/aV|W:D1W'O,AQK\j=QJ 6ea{[)&hߐ#dOˀf6kH}_6Sm9BJHm&l<֞N;gZl֞("RI1% '.F(j \+8Y%9Lo؍@װd xFb(fVI׳F]jgБ'>9]jdXe;K '=&8pBK,~a \bnْ&[j!n [˖6!PT@4"A唅p tyAjj-W=ji% RMC*EpMIjmF #bn:}㝜 )kOƹvbՃm0QAQ/ȵUi:P\$èbS=ERA,Ar)K,%/ Y(Fs`0a(I4o;x AuC=3Kgg>~dEFA&5%JfR1JTQj@" jVG0)Yū@he 5T:3+5JSQpjJe`]C!L\rR,+9K" DsN=Zr[=Z$G݊ BXo(uN}%n \ X\KFm(wjǵ݉Ph@)ћf'RA(:`xΣ}ktG92tH },OF #O[} Jr>H`#8doK5,U5.,uk. 
192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 29 15:01:26 crc kubenswrapper[4620]: I0129 15:01:26.399170 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58986->192.168.126.11:17697: read: connection reset by peer"
Jan 29 15:01:26 crc kubenswrapper[4620]: I0129 15:01:26.399524 4620 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 29 15:01:26 crc kubenswrapper[4620]: I0129 15:01:26.399557 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 29 15:01:26 crc kubenswrapper[4620]: I0129 15:01:26.814588 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 03:53:37.146072008 +0000 UTC
Jan 29 15:01:27 crc kubenswrapper[4620]: I0129 15:01:27.360908 4620 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-29 14:56:26 +0000 UTC, rotation deadline is 2026-11-09 09:07:37.948071313 +0000 UTC
Jan 29 15:01:27 crc kubenswrapper[4620]: I0129 15:01:27.361331 4620 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6810h6m10.586745538s for next certificate rotation
Jan 29 15:01:27 crc kubenswrapper[4620]: I0129 15:01:27.814891 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 13:39:07.830990199 +0000 UTC
Jan 29 15:01:27 crc kubenswrapper[4620]: I0129 15:01:27.834826 4620 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 29 15:01:28 crc kubenswrapper[4620]: I0129 15:01:28.058732 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 29 15:01:28 crc kubenswrapper[4620]: I0129 15:01:28.059212 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 29 15:01:28 crc kubenswrapper[4620]: I0129 15:01:28.060998 4620 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f" exitCode=255
Jan 29 15:01:28 crc kubenswrapper[4620]: I0129 15:01:28.061062 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f"}
Jan 29 15:01:28 crc kubenswrapper[4620]: I0129 15:01:28.061137 4620 scope.go:117] "RemoveContainer" containerID="5950afd84f4b96b696908d785a9ce7f78fbb9c7c6b6cc995cc9ac450cd94c8a8"
Jan 29 15:01:28 crc kubenswrapper[4620]: I0129 15:01:28.061343 4620 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 15:01:28 crc kubenswrapper[4620]: I0129 15:01:28.062340 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:01:28 crc kubenswrapper[4620]: I0129 15:01:28.062378 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:01:28 crc kubenswrapper[4620]: I0129 15:01:28.062390 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:01:28 crc kubenswrapper[4620]: I0129 15:01:28.063227 4620 scope.go:117] "RemoveContainer" containerID="aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f"
Jan 29 15:01:28 crc kubenswrapper[4620]: E0129 15:01:28.063425 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 29 15:01:28 crc kubenswrapper[4620]: I0129 15:01:28.815258 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:47:55.245080745 +0000 UTC
Jan 29 15:01:29 crc kubenswrapper[4620]: I0129 15:01:29.065939 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 29 15:01:29 crc kubenswrapper[4620]: I0129 15:01:29.789825 4620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 15:01:29 crc kubenswrapper[4620]: I0129 15:01:29.790015 4620 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 15:01:29 crc kubenswrapper[4620]: I0129 15:01:29.791037 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:01:29 crc kubenswrapper[4620]: I0129 15:01:29.791067 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:01:29 crc kubenswrapper[4620]: I0129 15:01:29.791078 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:01:29 crc kubenswrapper[4620]: I0129 15:01:29.791647 4620 scope.go:117] "RemoveContainer" containerID="aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f"
Jan 29 15:01:29 crc kubenswrapper[4620]: E0129 15:01:29.791836 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 29 15:01:29 crc kubenswrapper[4620]: I0129 15:01:29.815782 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 09:39:06.33987757 +0000 UTC
Jan 29 15:01:30 crc kubenswrapper[4620]: I0129 15:01:30.533391 4620 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 29 15:01:30 crc kubenswrapper[4620]: I0129 15:01:30.816334 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 05:54:46.36419657 +0000 UTC
Jan 29 15:01:30 crc kubenswrapper[4620]: I0129 15:01:30.820645 4620 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 29 15:01:30 crc kubenswrapper[4620]: E0129 15:01:30.951505 4620 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 29 15:01:31 crc kubenswrapper[4620]: I0129 15:01:31.816915 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 01:32:50.825787319 +0000 UTC
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.315829 4620 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.352850 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.358348 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.797741 4620 apiserver.go:52] "Watching apiserver"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.802502 4620 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.803066 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-7469t","openshift-multus/multus-additional-cni-plugins-hpt9v","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-node-identity/network-node-identity-vrzqb","openshift-dns/node-resolver-kqpq8","openshift-image-registry/node-ca-ckzvr","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-multus/multus-tlwgt","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-ovn-kubernetes/ovnkube-node-ks4d9"]
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.803599 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.803711 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:01:32 crc kubenswrapper[4620]: E0129 15:01:32.803848 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.804099 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:01:32 crc kubenswrapper[4620]: E0129 15:01:32.804259 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.804341 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.804486 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.804806 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:01:32 crc kubenswrapper[4620]: E0129 15:01:32.804998 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.805363 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-kqpq8"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.805508 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ckzvr"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.805731 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-tlwgt"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.806047 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-7469t"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.806368 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-hpt9v"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.806934 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.809220 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.809415 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.810362 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.810621 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.810927 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.811179 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.812584 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.813226 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.813663 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.814027 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.817998 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.820661 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.820866 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.821135 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.821429 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.821463 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.821687 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 17:46:03.397144193 +0000 UTC
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.821868 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.822032 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.822392 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.822481 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.822926 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.823431 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.824925 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.825252 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.825734 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.826165 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.826466 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.832299 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.832367 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.832723 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.832949 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.833047 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.833164 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.833274 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.833304 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.850918 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.864157 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.892695 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.902599 4620 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.906679 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.926858 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.939482 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.951838 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.953936 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954078 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954154 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954227 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954290 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954407 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954509 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954323 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954455 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954657 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954465 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954694 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954588 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954604 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954740 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954773 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954789 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954803 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954818 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 15:01:32 crc 
kubenswrapper[4620]: I0129 15:01:32.954833 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954849 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954863 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954878 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954891 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954908 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954922 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954936 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954952 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954965 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954981 4620 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.954996 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955010 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955056 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955072 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955087 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955122 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955153 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955171 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955189 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 
15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955205 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955221 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955235 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955253 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955267 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955284 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955297 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955312 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955326 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955340 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955355 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955370 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955384 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955399 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955413 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955427 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955442 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955457 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955472 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955486 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955500 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955514 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955528 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955542 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955557 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955580 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955596 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955610 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955625 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955639 4620 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955672 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955687 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955702 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955716 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955731 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955747 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955776 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955791 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955841 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955792 
4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955863 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955884 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955905 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955925 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955939 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955954 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955970 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.955989 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956004 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956020 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956037 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956082 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956101 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956116 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956132 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956147 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956162 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956182 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956208 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956230 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956252 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956279 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956325 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956335 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956426 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956455 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956478 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956499 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956521 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956546 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956567 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956588 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956610 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956631 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956654 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956668 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956676 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956784 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956812 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956859 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956885 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956911 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.956964 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 15:01:32 crc 
kubenswrapper[4620]: I0129 15:01:32.956987 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957029 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957052 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957076 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957119 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957230 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957287 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957318 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957364 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957390 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957433 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957458 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957483 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957532 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957557 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957579 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957622 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957645 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957688 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957714 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957736 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957791 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957813 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957859 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957883 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957906 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957959 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.957981 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958022 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958047 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958068 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958084 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958111 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958141 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958185 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958225 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958268 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958292 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958312 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958352 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958376 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958395 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: 
"kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958419 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958445 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958469 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958512 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958537 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958578 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958600 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958621 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958662 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958685 4620 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958706 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958746 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958803 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958827 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958869 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958893 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958918 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958962 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.958985 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.959741 4620 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.959768 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960073 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960125 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960152 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960182 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960210 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960236 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960259 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960285 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960310 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960335 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960358 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960384 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960411 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960434 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960459 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960483 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960506 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960529 4620 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960553 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960578 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960602 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960942 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.960975 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.961000 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.961068 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fa9cbed4-05b4-48af-81c2-9f8903dc765e-env-overrides\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.971407 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.971686 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.971858 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.972293 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.972329 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.972606 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.972791 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: E0129 15:01:32.973130 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:01:33.472945774 +0000 UTC m=+34.085773429 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.973256 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.973391 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.973462 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.973584 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.961096 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-host-run-k8s-cni-cncf-io\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.973876 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-run-netns\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.973917 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fa9cbed4-05b4-48af-81c2-9f8903dc765e-ovnkube-script-lib\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.973937 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.973959 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.973984 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a76cce43-3d01-4158-b23a-e21fd5927792-mcd-auth-proxy-config\") pod \"machine-config-daemon-7469t\" (UID: \"a76cce43-3d01-4158-b23a-e21fd5927792\") " pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974012 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f995d57b-b546-4226-83f5-3e2c1becec57-system-cni-dir\") pod \"multus-additional-cni-plugins-hpt9v\" (UID: \"f995d57b-b546-4226-83f5-3e2c1becec57\") " pod="openshift-multus/multus-additional-cni-plugins-hpt9v" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974039 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974063 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-run-systemd\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974082 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974107 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-multus-conf-dir\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974127 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f995d57b-b546-4226-83f5-3e2c1becec57-os-release\") pod \"multus-additional-cni-plugins-hpt9v\" (UID: \"f995d57b-b546-4226-83f5-3e2c1becec57\") " pod="openshift-multus/multus-additional-cni-plugins-hpt9v" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974162 4620 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974186 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974206 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-run-openvswitch\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974226 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fa9cbed4-05b4-48af-81c2-9f8903dc765e-ovn-node-metrics-cert\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974247 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f995d57b-b546-4226-83f5-3e2c1becec57-cni-binary-copy\") pod \"multus-additional-cni-plugins-hpt9v\" (UID: \"f995d57b-b546-4226-83f5-3e2c1becec57\") " pod="openshift-multus/multus-additional-cni-plugins-hpt9v" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974270 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fa9cbed4-05b4-48af-81c2-9f8903dc765e-ovnkube-config\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974293 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974317 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-host-run-netns\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974342 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a76cce43-3d01-4158-b23a-e21fd5927792-rootfs\") pod \"machine-config-daemon-7469t\" (UID: 
\"a76cce43-3d01-4158-b23a-e21fd5927792\") " pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974366 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/93c9842c-403b-4367-a18f-32a8fa8e58de-serviceca\") pod \"node-ca-ckzvr\" (UID: \"93c9842c-403b-4367-a18f-32a8fa8e58de\") " pod="openshift-image-registry/node-ca-ckzvr" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974340 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974385 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-run-ovn-kubernetes\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974408 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-kubelet\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974419 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974430 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-cni-netd\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974451 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-os-release\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974470 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-host-var-lib-cni-multus\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974488 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f66b658d-e5ec-445e-9494-0a0062e87c4c-multus-daemon-config\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974516 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974543 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-systemd-units\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974564 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-host-var-lib-cni-bin\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974583 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-hostroot\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974605 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-host-run-multus-certs\") pod \"multus-tlwgt\" (UID: 
\"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974624 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-log-socket\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974644 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-cnibin\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974662 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f995d57b-b546-4226-83f5-3e2c1becec57-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hpt9v\" (UID: \"f995d57b-b546-4226-83f5-3e2c1becec57\") " pod="openshift-multus/multus-additional-cni-plugins-hpt9v" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974680 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-slash\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974707 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974733 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974774 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f66b658d-e5ec-445e-9494-0a0062e87c4c-cni-binary-copy\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974799 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f995d57b-b546-4226-83f5-3e2c1becec57-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hpt9v\" (UID: \"f995d57b-b546-4226-83f5-3e2c1becec57\") " pod="openshift-multus/multus-additional-cni-plugins-hpt9v" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974823 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-host-var-lib-kubelet\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974847 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a76cce43-3d01-4158-b23a-e21fd5927792-proxy-tls\") pod \"machine-config-daemon-7469t\" (UID: \"a76cce43-3d01-4158-b23a-e21fd5927792\") " pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974865 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/93c9842c-403b-4367-a18f-32a8fa8e58de-host\") pod \"node-ca-ckzvr\" (UID: \"93c9842c-403b-4367-a18f-32a8fa8e58de\") " pod="openshift-image-registry/node-ca-ckzvr" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974884 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-var-lib-openvswitch\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974906 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-cni-bin\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974929 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-multus-cni-dir\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974949 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kglgc\" (UniqueName: \"kubernetes.io/projected/a76cce43-3d01-4158-b23a-e21fd5927792-kube-api-access-kglgc\") pod \"machine-config-daemon-7469t\" (UID: \"a76cce43-3d01-4158-b23a-e21fd5927792\") " pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974972 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-422fh\" (UniqueName: \"kubernetes.io/projected/3805ba80-f304-49f1-8e23-e25e0a1ff177-kube-api-access-422fh\") pod \"node-resolver-kqpq8\" (UID: \"3805ba80-f304-49f1-8e23-e25e0a1ff177\") " pod="openshift-dns/node-resolver-kqpq8" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.974994 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-run-ovn\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975015 4620 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbvrt\" (UniqueName: \"kubernetes.io/projected/fa9cbed4-05b4-48af-81c2-9f8903dc765e-kube-api-access-bbvrt\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975040 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975061 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975080 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs7pp\" (UniqueName: \"kubernetes.io/projected/f66b658d-e5ec-445e-9494-0a0062e87c4c-kube-api-access-zs7pp\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975101 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f995d57b-b546-4226-83f5-3e2c1becec57-cnibin\") pod \"multus-additional-cni-plugins-hpt9v\" (UID: \"f995d57b-b546-4226-83f5-3e2c1becec57\") " pod="openshift-multus/multus-additional-cni-plugins-hpt9v" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975123 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvp74\" (UniqueName: \"kubernetes.io/projected/f995d57b-b546-4226-83f5-3e2c1becec57-kube-api-access-wvp74\") pod \"multus-additional-cni-plugins-hpt9v\" (UID: \"f995d57b-b546-4226-83f5-3e2c1becec57\") " pod="openshift-multus/multus-additional-cni-plugins-hpt9v" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975149 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6btr\" (UniqueName: \"kubernetes.io/projected/93c9842c-403b-4367-a18f-32a8fa8e58de-kube-api-access-p6btr\") pod \"node-ca-ckzvr\" (UID: \"93c9842c-403b-4367-a18f-32a8fa8e58de\") " pod="openshift-image-registry/node-ca-ckzvr" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975171 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-etc-kubernetes\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975192 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975214 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3805ba80-f304-49f1-8e23-e25e0a1ff177-hosts-file\") pod \"node-resolver-kqpq8\" (UID: \"3805ba80-f304-49f1-8e23-e25e0a1ff177\") " pod="openshift-dns/node-resolver-kqpq8" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975234 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-etc-openvswitch\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975254 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-multus-socket-dir-parent\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975278 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975307 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-node-log\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975340 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975374 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-system-cni-dir\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975397 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975478 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath 
\"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975493 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975505 4620 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975520 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975534 4620 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975545 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975556 4620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975571 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975582 4620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975598 4620 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975613 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975632 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975647 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975706 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc 
kubenswrapper[4620]: I0129 15:01:32.975722 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975745 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975776 4620 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975794 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975848 4620 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975869 4620 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975887 4620 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975903 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975928 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975954 4620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975969 4620 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975986 4620 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.975843 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: 
"v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.976016 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.976344 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.976624 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.977271 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.977522 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.977744 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.978552 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.980489 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.981205 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.981463 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.981998 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.986240 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.986545 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.986824 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.987349 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.987646 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.988159 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.988887 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.989163 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.990983 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.991075 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.991981 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.993019 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.993149 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.993284 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.993317 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.993550 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.993809 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.993894 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.993931 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.993950 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.994030 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.991858 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.994487 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.993388 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.995436 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.995524 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.995839 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.996081 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.996038 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.996153 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.996245 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.996567 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.996680 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.996941 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.997136 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.997231 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.997283 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.997721 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.997737 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.997712 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.998157 4620 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.998522 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.998660 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.998803 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.999588 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.999856 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:32 crc kubenswrapper[4620]: I0129 15:01:32.999917 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:32.999998 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.000387 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.000412 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.001198 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.001534 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.000962 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.001746 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.002097 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.001627 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.002547 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.002583 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.002839 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.002862 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.003088 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.003112 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.003109 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.003197 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.004077 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.004185 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.004691 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.004966 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.005518 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.005633 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: E0129 15:01:33.006323 4620 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.006493 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:01:33 crc kubenswrapper[4620]: E0129 15:01:33.006595 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:33.506575664 +0000 UTC m=+34.119403309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.006949 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.007074 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.007312 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.008232 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.008605 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.008904 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.012366 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.014923 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.019362 4620 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.019576 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.021936 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.027429 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:01:33 crc kubenswrapper[4620]: I0129 15:01:33.033748 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.032624 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.033119 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.033180 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.033769 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.033726 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.034463 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.034945 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.033436 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.034986 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.033700 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.035026 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.035073 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.035243 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.034665 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.035118 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.035796 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.036164 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.036170 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.036222 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.036426 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.036473 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.036495 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.036668 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.036819 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.037087 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.037210 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.037276 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.037337 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.037409 4620 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.037517 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.037627 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.037668 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.037859 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.038099 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.038514 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.038792 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.038924 4620 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.039123 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.038926 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.039130 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.039191 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.039445 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.039599 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.039603 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.038988 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.039773 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.039932 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.039970 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.040191 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.040231 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.040258 4620 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.040426 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.037658 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.040779 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.041224 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.040842 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.040894 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.043876 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.045235 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:34.545208454 +0000 UTC m=+35.158036099 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.045704 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:34.5456836 +0000 UTC m=+35.158511245 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.046242 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.046376 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:34.546343301 +0000 UTC m=+35.159170946 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.046650 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.046842 4620 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.047337 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.047580 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.047971 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.048217 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 15:01:34 crc kubenswrapper[4620]: W0129 15:01:34.048133 4620 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~secret/encryption-config Jan 29 15:01:34 crc kubenswrapper[4620]: W0129 15:01:34.046464 4620 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~projected/kube-api-access-6g6sz Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.048574 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:01:35.04853937 +0000 UTC m=+35.661367015 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.048796 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.048838 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.048861 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.048880 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.048901 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.048927 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049032 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-run-openvswitch\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049080 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fa9cbed4-05b4-48af-81c2-9f8903dc765e-ovn-node-metrics-cert\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049099 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/f995d57b-b546-4226-83f5-3e2c1becec57-cni-binary-copy\") pod \"multus-additional-cni-plugins-hpt9v\" (UID: \"f995d57b-b546-4226-83f5-3e2c1becec57\") " pod="openshift-multus/multus-additional-cni-plugins-hpt9v" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049135 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-host-run-netns\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049158 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a76cce43-3d01-4158-b23a-e21fd5927792-rootfs\") pod \"machine-config-daemon-7469t\" (UID: \"a76cce43-3d01-4158-b23a-e21fd5927792\") " pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049179 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/93c9842c-403b-4367-a18f-32a8fa8e58de-serviceca\") pod \"node-ca-ckzvr\" (UID: \"93c9842c-403b-4367-a18f-32a8fa8e58de\") " pod="openshift-image-registry/node-ca-ckzvr" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049197 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-run-ovn-kubernetes\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049215 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fa9cbed4-05b4-48af-81c2-9f8903dc765e-ovnkube-config\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049245 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-kubelet\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049264 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-cni-netd\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049307 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f66b658d-e5ec-445e-9494-0a0062e87c4c-multus-daemon-config\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049330 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-os-release\") pod \"multus-tlwgt\" 
(UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049349 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-host-var-lib-cni-multus\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049367 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-hostroot\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049388 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-host-run-multus-certs\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049411 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-systemd-units\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049432 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-host-var-lib-cni-bin\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049451 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f995d57b-b546-4226-83f5-3e2c1becec57-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hpt9v\" (UID: \"f995d57b-b546-4226-83f5-3e2c1becec57\") " pod="openshift-multus/multus-additional-cni-plugins-hpt9v" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049468 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-slash\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049487 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-log-socket\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049505 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-cnibin\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049543 4620 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f66b658d-e5ec-445e-9494-0a0062e87c4c-cni-binary-copy\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049567 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f995d57b-b546-4226-83f5-3e2c1becec57-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hpt9v\" (UID: \"f995d57b-b546-4226-83f5-3e2c1becec57\") " pod="openshift-multus/multus-additional-cni-plugins-hpt9v" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049594 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/93c9842c-403b-4367-a18f-32a8fa8e58de-host\") pod \"node-ca-ckzvr\" (UID: \"93c9842c-403b-4367-a18f-32a8fa8e58de\") " pod="openshift-image-registry/node-ca-ckzvr" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049614 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-host-var-lib-kubelet\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049632 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a76cce43-3d01-4158-b23a-e21fd5927792-proxy-tls\") pod \"machine-config-daemon-7469t\" (UID: \"a76cce43-3d01-4158-b23a-e21fd5927792\") " pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049649 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-multus-cni-dir\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049676 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kglgc\" (UniqueName: \"kubernetes.io/projected/a76cce43-3d01-4158-b23a-e21fd5927792-kube-api-access-kglgc\") pod \"machine-config-daemon-7469t\" (UID: \"a76cce43-3d01-4158-b23a-e21fd5927792\") " pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049695 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-422fh\" (UniqueName: \"kubernetes.io/projected/3805ba80-f304-49f1-8e23-e25e0a1ff177-kube-api-access-422fh\") pod \"node-resolver-kqpq8\" (UID: \"3805ba80-f304-49f1-8e23-e25e0a1ff177\") " pod="openshift-dns/node-resolver-kqpq8" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049712 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-var-lib-openvswitch\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049730 4620 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-cni-bin\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049749 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049789 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs7pp\" (UniqueName: \"kubernetes.io/projected/f66b658d-e5ec-445e-9494-0a0062e87c4c-kube-api-access-zs7pp\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049810 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f995d57b-b546-4226-83f5-3e2c1becec57-cnibin\") pod \"multus-additional-cni-plugins-hpt9v\" (UID: \"f995d57b-b546-4226-83f5-3e2c1becec57\") " pod="openshift-multus/multus-additional-cni-plugins-hpt9v" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049829 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvp74\" (UniqueName: \"kubernetes.io/projected/f995d57b-b546-4226-83f5-3e2c1becec57-kube-api-access-wvp74\") pod \"multus-additional-cni-plugins-hpt9v\" (UID: \"f995d57b-b546-4226-83f5-3e2c1becec57\") " pod="openshift-multus/multus-additional-cni-plugins-hpt9v" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049849 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-run-ovn\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049873 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbvrt\" (UniqueName: \"kubernetes.io/projected/fa9cbed4-05b4-48af-81c2-9f8903dc765e-kube-api-access-bbvrt\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049890 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6btr\" (UniqueName: \"kubernetes.io/projected/93c9842c-403b-4367-a18f-32a8fa8e58de-kube-api-access-p6btr\") pod \"node-ca-ckzvr\" (UID: \"93c9842c-403b-4367-a18f-32a8fa8e58de\") " pod="openshift-image-registry/node-ca-ckzvr" Jan 29 15:01:34 crc kubenswrapper[4620]: W0129 15:01:34.049875 4620 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes/kubernetes.io~projected/kube-api-access-xcgwh Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049947 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-etc-kubernetes\") pod \"multus-tlwgt\" (UID: 
\"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049978 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049960 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050001 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: W0129 15:01:34.048685 4620 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~secret/console-oauth-config Jan 29 15:01:34 crc kubenswrapper[4620]: W0129 15:01:34.048692 4620 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes/kubernetes.io~projected/kube-api-access-jhbk2 Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050041 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050023 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: W0129 15:01:34.050073 4620 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes/kubernetes.io~configmap/cni-binary-copy Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050082 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.049908 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-etc-kubernetes\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050110 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-etc-openvswitch\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050128 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-multus-socket-dir-parent\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050145 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050171 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3805ba80-f304-49f1-8e23-e25e0a1ff177-hosts-file\") pod \"node-resolver-kqpq8\" (UID: \"3805ba80-f304-49f1-8e23-e25e0a1ff177\") " pod="openshift-dns/node-resolver-kqpq8" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050216 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-system-cni-dir\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050238 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050256 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-node-log\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050283 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-run-netns\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050324 4620 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fa9cbed4-05b4-48af-81c2-9f8903dc765e-env-overrides\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050344 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-host-run-k8s-cni-cncf-io\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050364 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fa9cbed4-05b4-48af-81c2-9f8903dc765e-ovnkube-script-lib\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050390 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a76cce43-3d01-4158-b23a-e21fd5927792-mcd-auth-proxy-config\") pod \"machine-config-daemon-7469t\" (UID: \"a76cce43-3d01-4158-b23a-e21fd5927792\") " pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050411 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f995d57b-b546-4226-83f5-3e2c1becec57-system-cni-dir\") pod \"multus-additional-cni-plugins-hpt9v\" (UID: \"f995d57b-b546-4226-83f5-3e2c1becec57\") " pod="openshift-multus/multus-additional-cni-plugins-hpt9v" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050432 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-run-systemd\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050451 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050470 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-multus-conf-dir\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050490 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f995d57b-b546-4226-83f5-3e2c1becec57-os-release\") pod \"multus-additional-cni-plugins-hpt9v\" (UID: \"f995d57b-b546-4226-83f5-3e2c1becec57\") " pod="openshift-multus/multus-additional-cni-plugins-hpt9v" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 
15:01:34.050555 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: W0129 15:01:34.050618 4620 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~projected/kube-api-access-9xfj7 Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050627 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: W0129 15:01:34.050670 4620 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050677 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: W0129 15:01:34.050719 4620 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~projected/kube-api-access-d4lsv Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050727 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: W0129 15:01:34.050838 4620 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes/kubernetes.io~configmap/env-overrides Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050847 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: W0129 15:01:34.050904 4620 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~secret/metrics-tls Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050902 4620 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050938 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-run-openvswitch\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.050915 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: W0129 15:01:34.051935 4620 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~secret/stats-auth Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.051951 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.051994 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.052003 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.051959 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.053198 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 13:46:16.681932283 +0000 UTC Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.053377 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f995d57b-b546-4226-83f5-3e2c1becec57-os-release\") pod \"multus-additional-cni-plugins-hpt9v\" (UID: \"f995d57b-b546-4226-83f5-3e2c1becec57\") " pod="openshift-multus/multus-additional-cni-plugins-hpt9v" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.053430 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.053485 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.053577 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.053912 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-systemd-units\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.053970 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-host-run-netns\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.053974 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.054128 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-host-var-lib-cni-bin\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.054233 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.054416 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f995d57b-b546-4226-83f5-3e2c1becec57-cni-binary-copy\") pod \"multus-additional-cni-plugins-hpt9v\" (UID: \"f995d57b-b546-4226-83f5-3e2c1becec57\") " pod="openshift-multus/multus-additional-cni-plugins-hpt9v" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.054478 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-var-lib-openvswitch\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.054500 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-cni-bin\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.054554 4620 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.054596 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-cni-netd\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.054605 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:35.054590581 +0000 UTC m=+35.667418216 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.054626 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.054879 4620 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.055060 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f995d57b-b546-4226-83f5-3e2c1becec57-cnibin\") pod \"multus-additional-cni-plugins-hpt9v\" (UID: \"f995d57b-b546-4226-83f5-3e2c1becec57\") " pod="openshift-multus/multus-additional-cni-plugins-hpt9v" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.055234 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-run-ovn\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.055256 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-host-run-k8s-cni-cncf-io\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.055283 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3805ba80-f304-49f1-8e23-e25e0a1ff177-hosts-file\") pod \"node-resolver-kqpq8\" (UID: \"3805ba80-f304-49f1-8e23-e25e0a1ff177\") " pod="openshift-dns/node-resolver-kqpq8" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.055411 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f995d57b-b546-4226-83f5-3e2c1becec57-system-cni-dir\") pod \"multus-additional-cni-plugins-hpt9v\" (UID: \"f995d57b-b546-4226-83f5-3e2c1becec57\") " pod="openshift-multus/multus-additional-cni-plugins-hpt9v" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.055517 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-system-cni-dir\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.055535 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.056201 4620 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a76cce43-3d01-4158-b23a-e21fd5927792-rootfs\") pod \"machine-config-daemon-7469t\" (UID: \"a76cce43-3d01-4158-b23a-e21fd5927792\") " pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.057030 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f66b658d-e5ec-445e-9494-0a0062e87c4c-multus-daemon-config\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.057100 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-os-release\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.057129 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-host-var-lib-cni-multus\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.057155 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-hostroot\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.057179 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-host-run-multus-certs\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.058657 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f995d57b-b546-4226-83f5-3e2c1becec57-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hpt9v\" (UID: \"f995d57b-b546-4226-83f5-3e2c1becec57\") " pod="openshift-multus/multus-additional-cni-plugins-hpt9v" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.058706 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/93c9842c-403b-4367-a18f-32a8fa8e58de-host\") pod \"node-ca-ckzvr\" (UID: \"93c9842c-403b-4367-a18f-32a8fa8e58de\") " pod="openshift-image-registry/node-ca-ckzvr" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.058906 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f995d57b-b546-4226-83f5-3e2c1becec57-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hpt9v\" (UID: \"f995d57b-b546-4226-83f5-3e2c1becec57\") " pod="openshift-multus/multus-additional-cni-plugins-hpt9v" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.058774 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-host-var-lib-kubelet\") pod 
\"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.058947 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-slash\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.058975 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-log-socket\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.059010 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-cnibin\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.059543 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f66b658d-e5ec-445e-9494-0a0062e87c4c-cni-binary-copy\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.075441 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-multus-cni-dir\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.075528 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.075871 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-node-log\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.075906 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-run-netns\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.075943 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-etc-openvswitch\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.076543 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a76cce43-3d01-4158-b23a-e21fd5927792-proxy-tls\") pod \"machine-config-daemon-7469t\" (UID: \"a76cce43-3d01-4158-b23a-e21fd5927792\") " pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.076749 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a76cce43-3d01-4158-b23a-e21fd5927792-mcd-auth-proxy-config\") pod \"machine-config-daemon-7469t\" (UID: \"a76cce43-3d01-4158-b23a-e21fd5927792\") " pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.076835 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-multus-socket-dir-parent\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.077747 4620 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.080208 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.080217 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-run-ovn-kubernetes\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.080292 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.081160 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-kubelet\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.081173 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-422fh\" (UniqueName: \"kubernetes.io/projected/3805ba80-f304-49f1-8e23-e25e0a1ff177-kube-api-access-422fh\") pod \"node-resolver-kqpq8\" (UID: \"3805ba80-f304-49f1-8e23-e25e0a1ff177\") " pod="openshift-dns/node-resolver-kqpq8" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.081256 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.081468 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f66b658d-e5ec-445e-9494-0a0062e87c4c-multus-conf-dir\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.082123 4620 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.082205 4620 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.082491 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fa9cbed4-05b4-48af-81c2-9f8903dc765e-ovn-node-metrics-cert\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.082959 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/93c9842c-403b-4367-a18f-32a8fa8e58de-serviceca\") pod \"node-ca-ckzvr\" (UID: \"93c9842c-403b-4367-a18f-32a8fa8e58de\") " pod="openshift-image-registry/node-ca-ckzvr" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083004 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-run-systemd\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083075 4620 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083104 4620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 
15:01:34.083134 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083162 4620 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083177 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083191 4620 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083208 4620 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083225 4620 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083237 4620 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083248 4620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083258 4620 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083267 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083277 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083290 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083309 4620 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083325 4620 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083338 4620 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083350 4620 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083762 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083785 4620 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083797 4620 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083810 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083821 4620 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083835 4620 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083845 4620 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083857 4620 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083868 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083892 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083902 4620 reconciler_common.go:293] "Volume detached for 
volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083912 4620 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083923 4620 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083937 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083933 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kglgc\" (UniqueName: \"kubernetes.io/projected/a76cce43-3d01-4158-b23a-e21fd5927792-kube-api-access-kglgc\") pod \"machine-config-daemon-7469t\" (UID: \"a76cce43-3d01-4158-b23a-e21fd5927792\") " pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083949 4620 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.083997 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084011 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084023 4620 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084034 4620 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084053 4620 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084072 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084087 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084101 4620 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084117 4620 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084130 4620 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084139 4620 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084142 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084169 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084180 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084150 4620 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084199 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084199 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084213 4620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084223 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084239 4620 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084248 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084258 4620 
reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084268 4620 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084280 4620 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084291 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084213 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:34Z","lastTransitionTime":"2026-01-29T15:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084814 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084854 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.084301 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086000 4620 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086020 4620 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086041 4620 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086059 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086075 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086089 4620 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086107 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086125 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086139 4620 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086152 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086164 4620 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086175 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086187 4620 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086200 4620 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086213 4620 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086226 4620 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086267 4620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086285 4620 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086306 4620 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086319 4620 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086334 4620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086346 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086359 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086374 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086393 4620 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086413 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086434 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086449 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086462 4620 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086475 4620 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086490 4620 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086502 4620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086517 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086530 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086541 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086391 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086555 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086667 4620 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086687 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086722 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086733 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086749 4620 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086807 4620 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086821 4620 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086870 4620 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086884 4620 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086896 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086907 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086917 4620 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086961 4620 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086972 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086983 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.086994 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087124 4620 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087138 4620 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087153 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087166 4620 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087178 4620 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087214 4620 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087225 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087236 4620 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087246 4620 reconciler_common.go:293] "Volume 
detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087256 4620 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087287 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087297 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087307 4620 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087317 4620 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087327 4620 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087337 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087366 4620 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087380 4620 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087389 4620 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087398 4620 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087407 4620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087415 4620 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087445 4620 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087458 4620 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087467 4620 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087477 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.087487 4620 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.090507 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6btr\" (UniqueName: \"kubernetes.io/projected/93c9842c-403b-4367-a18f-32a8fa8e58de-kube-api-access-p6btr\") pod \"node-ca-ckzvr\" (UID: \"93c9842c-403b-4367-a18f-32a8fa8e58de\") " pod="openshift-image-registry/node-ca-ckzvr" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.095484 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs7pp\" (UniqueName: \"kubernetes.io/projected/f66b658d-e5ec-445e-9494-0a0062e87c4c-kube-api-access-zs7pp\") pod \"multus-tlwgt\" (UID: \"f66b658d-e5ec-445e-9494-0a0062e87c4c\") " pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.097604 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.098821 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvp74\" (UniqueName: \"kubernetes.io/projected/f995d57b-b546-4226-83f5-3e2c1becec57-kube-api-access-wvp74\") pod \"multus-additional-cni-plugins-hpt9v\" (UID: \"f995d57b-b546-4226-83f5-3e2c1becec57\") " pod="openshift-multus/multus-additional-cni-plugins-hpt9v" Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.102005 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.106301 4620 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.106342 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.106355 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.106380 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.106391 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:34Z","lastTransitionTime":"2026-01-29T15:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.108872 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.110051 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.119178 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.122174 4620 
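Every status patch attempt above fails at the same point: nothing is accepting connections on the network-node-identity webhook endpoint that the API server must call before admitting the patch. A minimal sketch for reproducing the failing call at the TCP level, assuming Python 3 on the node itself; the address and port are copied from the Post URL in the errors above, everything else is an illustration, not part of the log:

    import socket

    # Endpoint copied from the failing webhook calls logged above; the
    # webhook binds to localhost, so this check assumes it runs on the node.
    ADDR = ("127.0.0.1", 9743)

    try:
        with socket.create_connection(ADDR, timeout=5):
            print("port 9743 accepts connections; the webhook may be serving again")
    except OSError as exc:
        # A refusal here matches the "connect: connection refused" retries above.
        print(f"cannot reach {ADDR[0]}:{ADDR[1]}: {exc}")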
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.124414 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.124553 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.124625 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.124699 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.124786 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:34Z","lastTransitionTime":"2026-01-29T15:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.138841 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.144539 4620 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.144587 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.144603 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.144624 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.144640 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:34Z","lastTransitionTime":"2026-01-29T15:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.156398 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.161371 4620 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.161633 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.161720 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.161875 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.161979 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:34Z","lastTransitionTime":"2026-01-29T15:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.174334 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.174735 4620 
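The entries above are a single retry burst; the kubelet gives up immediately afterwards in the "update node status exceeds retry count" entry below. A log this repetitive is easier to scan once the bursts are summarized. A minimal sketch, assuming Python 3 and this log saved as a plain-text kubelet.log; the match string is copied from the entries above, the filename is an assumption:

    import re
    from collections import Counter

    # Count failed node-status patch attempts per timestamp so retry
    # bursts stand out without reading the multi-kilobyte payloads.
    PATTERN = re.compile(
        r"(\w{3} \d+ \d{2}:\d{2}:\d{2}).*Error updating node status, will retry"
    )

    counts = Counter()
    with open("kubelet.log", encoding="utf-8", errors="replace") as log:
        for line in log:
            match = PATTERN.search(line)
            if match:
                counts[match.group(1)] += 1

    for timestamp, attempts in sorted(counts.items()):
        print(f"{timestamp}: {attempts} failed status-patch attempt(s)")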
kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.176924 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.176957 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.176968 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.176989 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.177003 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:34Z","lastTransitionTime":"2026-01-29T15:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.189443 4620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.189475 4620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.189494 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.189517 4620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.189530 4620 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.189543 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.265309 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.265339 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.265798 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.265830 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.265967 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.266568 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.279968 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.280096 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.280180 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.280281 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.280368 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:34Z","lastTransitionTime":"2026-01-29T15:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.290620 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.290820 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.290968 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.291195 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 15:01:34 crc kubenswrapper[4620]: W0129 15:01:34.291300 4620 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.291317 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.291628 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.291652 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.291664 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.291673 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.291682 4620 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.291692 4620 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.291702 4620 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.291712 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.291722 4620 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.291876 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.292132 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.292459 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.292576 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.292944 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.293076 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.293281 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.293821 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.294162 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.294806 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.299122 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.311412 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.313336 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.317023 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fa9cbed4-05b4-48af-81c2-9f8903dc765e-ovnkube-config\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.317042 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fa9cbed4-05b4-48af-81c2-9f8903dc765e-env-overrides\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.318720 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fa9cbed4-05b4-48af-81c2-9f8903dc765e-ovnkube-script-lib\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.325480 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbvrt\" (UniqueName: \"kubernetes.io/projected/fa9cbed4-05b4-48af-81c2-9f8903dc765e-kube-api-access-bbvrt\") pod \"ovnkube-node-ks4d9\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.327289 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.332052 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.332219 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.350083 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.354867 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.357779 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:01:34 crc kubenswrapper[4620]: W0129 15:01:34.364399 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-34890c604e52fc8a0088edbd0e9fce4c17ba5cbe76e4a4ceeb2a0eb7a88ec74c WatchSource:0}: Error finding container 34890c604e52fc8a0088edbd0e9fce4c17ba5cbe76e4a4ceeb2a0eb7a88ec74c: Status 404 returned error can't find the container with id 34890c604e52fc8a0088edbd0e9fce4c17ba5cbe76e4a4ceeb2a0eb7a88ec74c Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.365103 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:01:34 crc kubenswrapper[4620]: W0129 15:01:34.370935 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-27d56d02b74d3a7770567edd4f98dad860c6c1fbfeb70e103822111dc0794e81 WatchSource:0}: Error finding container 27d56d02b74d3a7770567edd4f98dad860c6c1fbfeb70e103822111dc0794e81: Status 404 returned error can't find the container with id 27d56d02b74d3a7770567edd4f98dad860c6c1fbfeb70e103822111dc0794e81 Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.376821 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-kqpq8" Jan 29 15:01:34 crc kubenswrapper[4620]: W0129 15:01:34.378406 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-4478c1c442fc88245774f346598f84daa3bc14ca34334a2345d06ac34007c8a3 WatchSource:0}: Error finding container 4478c1c442fc88245774f346598f84daa3bc14ca34334a2345d06ac34007c8a3: Status 404 returned error can't find the container with id 4478c1c442fc88245774f346598f84daa3bc14ca34334a2345d06ac34007c8a3 Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.382696 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.382747 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.382810 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.382829 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.382843 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:34Z","lastTransitionTime":"2026-01-29T15:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.386263 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-ckzvr" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.393128 4620 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.393157 4620 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.393173 4620 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.393185 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.393198 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.393210 4620 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.393221 4620 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.393234 4620 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.393245 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.393258 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.393270 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.393281 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.393293 4620 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.393304 4620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.393315 4620 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.393326 4620 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.393337 4620 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.395015 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-tlwgt" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.425066 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.485876 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.485925 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.485936 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.485957 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.485968 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:34Z","lastTransitionTime":"2026-01-29T15:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.588520 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.588568 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.588580 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.588602 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.588617 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:34Z","lastTransitionTime":"2026-01-29T15:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.595001 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.595078 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.595104 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.595239 4620 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.595259 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.595297 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.595316 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-29 15:01:35.595296176 +0000 UTC m=+36.208123821 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.595259 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.595342 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.595352 4620 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.595428 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:35.595400589 +0000 UTC m=+36.208228404 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.595321 4620 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.595567 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:35.595537083 +0000 UTC m=+36.208364728 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.691930 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.691996 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.692009 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.692043 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.692059 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:34Z","lastTransitionTime":"2026-01-29T15:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.796873 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.797239 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.797259 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.797281 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.797296 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:34Z","lastTransitionTime":"2026-01-29T15:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.871797 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.871797 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.872026 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:01:34 crc kubenswrapper[4620]: E0129 15:01:34.871985 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.876538 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.877121 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.878313 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.879020 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.879804 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.880409 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.881045 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.882650 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.883293 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.884810 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.885463 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.886637 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.887209 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.887707 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.888651 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.889244 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.890243 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.890656 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.891288 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.892397 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.892968 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.893937 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.894359 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.895360 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.895863 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.896454 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" 
path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.897471 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.897963 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.899108 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.899599 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.900152 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.900190 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.900199 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.900223 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.900235 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:34Z","lastTransitionTime":"2026-01-29T15:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.902118 4620 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.902281 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.904932 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.906109 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.908022 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.910962 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.911839 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.913150 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.913987 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.915384 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.916068 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.917686 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.918594 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.920011 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.920696 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.922160 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.923077 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.925033 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.925922 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.927438 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.928186 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.929460 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.930335 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 29 15:01:34 crc kubenswrapper[4620]: I0129 15:01:34.931190 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.003640 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.003691 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.003704 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.003725 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.003740 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:35Z","lastTransitionTime":"2026-01-29T15:01:35Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.053322 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:42:16.737346414 +0000 UTC Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.055099 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"34890c604e52fc8a0088edbd0e9fce4c17ba5cbe76e4a4ceeb2a0eb7a88ec74c"} Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.056663 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" event={"ID":"f995d57b-b546-4226-83f5-3e2c1becec57","Type":"ContainerStarted","Data":"b9454e8ce8e7a9e57967485f85b02a1367c0bad3b72df6e72a048b08af3cb685"} Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.057913 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tlwgt" event={"ID":"f66b658d-e5ec-445e-9494-0a0062e87c4c","Type":"ContainerStarted","Data":"53ea35064dc6341f236f34c0284901b7a273195486d35cf7e1c112c5d16582e2"} Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.059123 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerStarted","Data":"dd4e7b1d36aecdd297df0d78b858d34e8a91ebbcdc2d3fd2b44e537dfc86c54f"} Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.060252 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kqpq8" event={"ID":"3805ba80-f304-49f1-8e23-e25e0a1ff177","Type":"ContainerStarted","Data":"2bce48deae333f55ac91457f6afb7f3bc071960a3352fae2b48bdedc9cce228a"} Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.061514 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4478c1c442fc88245774f346598f84daa3bc14ca34334a2345d06ac34007c8a3"} Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.062722 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ckzvr" event={"ID":"93c9842c-403b-4367-a18f-32a8fa8e58de","Type":"ContainerStarted","Data":"5c5732ff0034d18d42b9797196881fec2fd5d7d5ff8dfb98d001541563933959"} Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.064626 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"27d56d02b74d3a7770567edd4f98dad860c6c1fbfeb70e103822111dc0794e81"} Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.101409 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.101595 4620 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:35 crc kubenswrapper[4620]: E0129 15:01:35.101629 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:01:37.101585595 +0000 UTC m=+37.714413240 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:01:35 crc kubenswrapper[4620]: E0129 15:01:35.101859 4620 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:01:35 crc kubenswrapper[4620]: E0129 15:01:35.102008 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:37.101976658 +0000 UTC m=+37.714804483 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.106688 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.106746 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.106776 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.106796 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.106808 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:35Z","lastTransitionTime":"2026-01-29T15:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.209139 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.209200 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.209219 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.209249 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.209266 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:35Z","lastTransitionTime":"2026-01-29T15:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.311951 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.311997 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.312013 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.312034 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.312048 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:35Z","lastTransitionTime":"2026-01-29T15:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.414190 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.414231 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.414241 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.414257 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.414266 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:35Z","lastTransitionTime":"2026-01-29T15:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.517135 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.517247 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.517267 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.517292 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.517308 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:35Z","lastTransitionTime":"2026-01-29T15:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.607222 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.607292 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.607346 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:01:35 crc kubenswrapper[4620]: E0129 15:01:35.607448 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:01:35 crc kubenswrapper[4620]: E0129 15:01:35.607499 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:01:35 crc kubenswrapper[4620]: E0129 15:01:35.607514 4620 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:35 crc kubenswrapper[4620]: E0129 15:01:35.607580 4620 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:37.607561844 +0000 UTC m=+38.220389499 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:35 crc kubenswrapper[4620]: E0129 15:01:35.607589 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:01:35 crc kubenswrapper[4620]: E0129 15:01:35.607619 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:01:35 crc kubenswrapper[4620]: E0129 15:01:35.607638 4620 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:35 crc kubenswrapper[4620]: E0129 15:01:35.607710 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:37.607685398 +0000 UTC m=+38.220513083 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:35 crc kubenswrapper[4620]: E0129 15:01:35.607821 4620 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:01:35 crc kubenswrapper[4620]: E0129 15:01:35.607864 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:37.607851463 +0000 UTC m=+38.220679148 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.622469 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.622515 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.622534 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.622557 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.622576 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:35Z","lastTransitionTime":"2026-01-29T15:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.726412 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.726475 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.726493 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.726526 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.726546 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:35Z","lastTransitionTime":"2026-01-29T15:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.829441 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.829818 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.830881 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.830946 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.830974 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:35Z","lastTransitionTime":"2026-01-29T15:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.872288 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:01:35 crc kubenswrapper[4620]: E0129 15:01:35.872536 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.933864 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.933912 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.933923 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.933944 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:35 crc kubenswrapper[4620]: I0129 15:01:35.933961 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:35Z","lastTransitionTime":"2026-01-29T15:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.037089 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.037155 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.037170 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.037192 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.037206 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:36Z","lastTransitionTime":"2026-01-29T15:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.054040 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 12:49:09.812119418 +0000 UTC Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.067839 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerStarted","Data":"1fe11e6e71e3bc2a90b15dfdad5e1734964f582922f5c79e6600685b06c40ec1"} Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.141005 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.141051 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.141061 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.141076 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.141087 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:36Z","lastTransitionTime":"2026-01-29T15:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.243678 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.243723 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.243736 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.243780 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.243795 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:36Z","lastTransitionTime":"2026-01-29T15:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.346633 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.346677 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.346688 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.346705 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.346716 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:36Z","lastTransitionTime":"2026-01-29T15:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.449424 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.449749 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.449977 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.450190 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.450314 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:36Z","lastTransitionTime":"2026-01-29T15:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.553089 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.553134 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.553142 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.553155 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.553165 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:36Z","lastTransitionTime":"2026-01-29T15:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.655647 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.655846 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.655863 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.655877 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.655886 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:36Z","lastTransitionTime":"2026-01-29T15:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.758850 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.758903 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.758915 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.758937 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.758953 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:36Z","lastTransitionTime":"2026-01-29T15:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.861715 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.861838 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.861857 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.861876 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.861924 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:36Z","lastTransitionTime":"2026-01-29T15:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.872420 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.872544 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:36 crc kubenswrapper[4620]: E0129 15:01:36.872737 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:01:36 crc kubenswrapper[4620]: E0129 15:01:36.873194 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.964140 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.964197 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.964218 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.964344 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:36 crc kubenswrapper[4620]: I0129 15:01:36.964357 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:36Z","lastTransitionTime":"2026-01-29T15:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.055183 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 01:25:27.397381121 +0000 UTC Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.067212 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.067510 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.067600 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.067666 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.067779 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:37Z","lastTransitionTime":"2026-01-29T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.126145 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.126506 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:37 crc kubenswrapper[4620]: E0129 15:01:37.126700 4620 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:01:37 crc kubenswrapper[4620]: E0129 15:01:37.126882 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:41.126864205 +0000 UTC m=+41.739691850 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:01:37 crc kubenswrapper[4620]: E0129 15:01:37.127033 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:01:41.126996589 +0000 UTC m=+41.739824244 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.171007 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.171057 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.171068 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.171117 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.171134 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:37Z","lastTransitionTime":"2026-01-29T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.274868 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.274922 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.274933 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.274954 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.274970 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:37Z","lastTransitionTime":"2026-01-29T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.378791 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.378858 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.378874 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.378900 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.378918 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:37Z","lastTransitionTime":"2026-01-29T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.481824 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.482102 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.482205 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.482390 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.482478 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:37Z","lastTransitionTime":"2026-01-29T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.584526 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.584582 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.584594 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.584619 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.584641 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:37Z","lastTransitionTime":"2026-01-29T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.633926 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.633973 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.634000 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:01:37 crc kubenswrapper[4620]: E0129 15:01:37.634131 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:01:37 crc kubenswrapper[4620]: E0129 15:01:37.634130 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:01:37 crc kubenswrapper[4620]: E0129 15:01:37.634174 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:01:37 crc kubenswrapper[4620]: E0129 15:01:37.634184 4620 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:01:37 crc kubenswrapper[4620]: E0129 15:01:37.634189 4620 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:37 crc kubenswrapper[4620]: E0129 15:01:37.634228 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:41.634214988 +0000 UTC m=+42.247042633 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:01:37 crc kubenswrapper[4620]: E0129 15:01:37.634146 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:01:37 crc kubenswrapper[4620]: E0129 15:01:37.634245 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:41.634236289 +0000 UTC m=+42.247063934 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:37 crc kubenswrapper[4620]: E0129 15:01:37.634248 4620 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:37 crc kubenswrapper[4620]: E0129 15:01:37.634318 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:41.63427175 +0000 UTC m=+42.247099505 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.686917 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.686972 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.686985 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.687004 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.687017 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:37Z","lastTransitionTime":"2026-01-29T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.790096 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.790135 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.790146 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.790161 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.790173 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:37Z","lastTransitionTime":"2026-01-29T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.871635 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:01:37 crc kubenswrapper[4620]: E0129 15:01:37.871971 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.892522 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.892862 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.893053 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.893247 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.893705 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:37Z","lastTransitionTime":"2026-01-29T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.996362 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.996605 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.996678 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.996790 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:37 crc kubenswrapper[4620]: I0129 15:01:37.996879 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:37Z","lastTransitionTime":"2026-01-29T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.055998 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 13:29:39.199832362 +0000 UTC Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.099122 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.099375 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.099440 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.099529 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.099615 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:38Z","lastTransitionTime":"2026-01-29T15:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.202456 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.202525 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.202548 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.202585 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.202654 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:38Z","lastTransitionTime":"2026-01-29T15:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.305784 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.305848 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.305860 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.305882 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.305898 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:38Z","lastTransitionTime":"2026-01-29T15:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.409483 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.409561 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.409578 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.409601 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.409615 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:38Z","lastTransitionTime":"2026-01-29T15:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.513252 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.513330 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.513344 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.513363 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.513376 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:38Z","lastTransitionTime":"2026-01-29T15:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.618527 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.618577 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.618607 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.618630 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.618647 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:38Z","lastTransitionTime":"2026-01-29T15:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.721435 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.721495 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.721523 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.721536 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.721544 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:38Z","lastTransitionTime":"2026-01-29T15:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.825087 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.825190 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.825207 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.825232 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.825246 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:38Z","lastTransitionTime":"2026-01-29T15:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.871618 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:01:38 crc kubenswrapper[4620]: E0129 15:01:38.871852 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.871639 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:38 crc kubenswrapper[4620]: E0129 15:01:38.872475 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.927682 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.927810 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.927828 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.927851 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:38 crc kubenswrapper[4620]: I0129 15:01:38.927872 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:38Z","lastTransitionTime":"2026-01-29T15:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.031533 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.031807 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.031887 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.031966 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.032040 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:39Z","lastTransitionTime":"2026-01-29T15:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.057567 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 02:48:30.341627176 +0000 UTC Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.135577 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.135927 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.136045 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.136169 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.136292 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:39Z","lastTransitionTime":"2026-01-29T15:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.238597 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.238638 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.238649 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.238664 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.238675 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:39Z","lastTransitionTime":"2026-01-29T15:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.341390 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.341635 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.341711 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.341824 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.341897 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:39Z","lastTransitionTime":"2026-01-29T15:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.444249 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.444607 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.445026 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.445466 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.445668 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:39Z","lastTransitionTime":"2026-01-29T15:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.549723 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.549792 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.549803 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.549832 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.549847 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:39Z","lastTransitionTime":"2026-01-29T15:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.653429 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.653746 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.654543 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.654597 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.654612 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:39Z","lastTransitionTime":"2026-01-29T15:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.757618 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.757961 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.758048 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.758134 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.758215 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:39Z","lastTransitionTime":"2026-01-29T15:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.860961 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.861257 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.861363 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.861483 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.861578 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:39Z","lastTransitionTime":"2026-01-29T15:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.872270 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:01:39 crc kubenswrapper[4620]: E0129 15:01:39.872449 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.964501 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.964557 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.964568 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.964585 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:39 crc kubenswrapper[4620]: I0129 15:01:39.964598 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:39Z","lastTransitionTime":"2026-01-29T15:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.058203 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 10:45:45.091583407 +0000 UTC Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.069298 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr"] Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.069505 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.069846 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.069954 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.070100 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.070162 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:40Z","lastTransitionTime":"2026-01-29T15:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.071479 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.075814 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.076307 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.089210 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.098359 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
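From here the status_manager entries all fail identically: the API server rejects each pod status patch because it cannot reach the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 (connection refused). The dependency is circular: the network-node-identity pod that serves this webhook is itself stuck in ContainerCreating until CNI comes up, as its own patch below shows. A quick diagnostic sketch to distinguish "nothing listening" from a filtered or slow endpoint, assuming only the address taken from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

// probe makes a bare TCP connection attempt to the webhook address.
// "connection refused" means no listener (the webhook pod is down);
// a timeout would instead suggest packet filtering.
func probe(addr string) {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Printf("%s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("%s: listening\n", addr)
}

func main() {
	probe("127.0.0.1:9743")
}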
Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.109522 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.119549 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.128781 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.138917 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.150439 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.161646 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39626c5c-59f0-466e-81f3-b434bae72182-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-755xr\" (UID: \"39626c5c-59f0-466e-81f3-b434bae72182\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr"
Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.161930 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39626c5c-59f0-466e-81f3-b434bae72182-env-overrides\") pod \"ovnkube-control-plane-749d76644c-755xr\" (UID: \"39626c5c-59f0-466e-81f3-b434bae72182\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr"
Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.162087 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhpr6\" (UniqueName: \"kubernetes.io/projected/39626c5c-59f0-466e-81f3-b434bae72182-kube-api-access-nhpr6\") pod \"ovnkube-control-plane-749d76644c-755xr\" (UID: \"39626c5c-59f0-466e-81f3-b434bae72182\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr"
Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.162204 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39626c5c-59f0-466e-81f3-b434bae72182-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-755xr\" (UID: \"39626c5c-59f0-466e-81f3-b434bae72182\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr"
Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.164811 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.173319 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.173370 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.173382 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.173408 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.173421 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:40Z","lastTransitionTime":"2026-01-29T15:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
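The reconciler_common entries above show the kubelet verifying the four volumes of ovnkube-control-plane-749d76644c-755xr (ovnkube-config, env-overrides, kube-api-access-nhpr6, ovn-control-plane-metrics-cert) before the pod can start. When auditing a log like this one, a small scraper can pull those pod/volume pairs out; this is a hypothetical helper matched to this log's escaping, not any kubelet API:

package main

import (
	"fmt"
	"regexp"
)

// volumeRe captures the volume name and the pod reference from
// "VerifyControllerAttachedVolume started" lines as they appear in
// this kubelet log (with backslash-escaped quotes).
var volumeRe = regexp.MustCompile(`VerifyControllerAttachedVolume started for volume \\"([^"\\]+)\\".* pod="([^"]+)"`)

func main() {
	line := `I0129 15:01:40.161646 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39626c5c-59f0-466e-81f3-b434bae72182-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-755xr\" (UID: \"39626c5c-59f0-466e-81f3-b434bae72182\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr"`
	if m := volumeRe.FindStringSubmatch(line); m != nil {
		fmt.Printf("pod %s waits on volume %s\n", m[2], m[1])
	}
}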
Has your network provider started?"} Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.177303 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.187967 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.198267 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.207805 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.223977 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.240926 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.263130 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39626c5c-59f0-466e-81f3-b434bae72182-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-755xr\" (UID: \"39626c5c-59f0-466e-81f3-b434bae72182\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.263190 4620 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39626c5c-59f0-466e-81f3-b434bae72182-env-overrides\") pod \"ovnkube-control-plane-749d76644c-755xr\" (UID: \"39626c5c-59f0-466e-81f3-b434bae72182\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.263220 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhpr6\" (UniqueName: \"kubernetes.io/projected/39626c5c-59f0-466e-81f3-b434bae72182-kube-api-access-nhpr6\") pod \"ovnkube-control-plane-749d76644c-755xr\" (UID: \"39626c5c-59f0-466e-81f3-b434bae72182\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.263265 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39626c5c-59f0-466e-81f3-b434bae72182-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-755xr\" (UID: \"39626c5c-59f0-466e-81f3-b434bae72182\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.264276 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/39626c5c-59f0-466e-81f3-b434bae72182-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-755xr\" (UID: \"39626c5c-59f0-466e-81f3-b434bae72182\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.264456 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39626c5c-59f0-466e-81f3-b434bae72182-env-overrides\") pod \"ovnkube-control-plane-749d76644c-755xr\" (UID: \"39626c5c-59f0-466e-81f3-b434bae72182\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.271431 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/39626c5c-59f0-466e-81f3-b434bae72182-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-755xr\" (UID: \"39626c5c-59f0-466e-81f3-b434bae72182\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.276783 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.276838 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.276852 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.276877 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.276890 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:40Z","lastTransitionTime":"2026-01-29T15:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.284528 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhpr6\" (UniqueName: \"kubernetes.io/projected/39626c5c-59f0-466e-81f3-b434bae72182-kube-api-access-nhpr6\") pod \"ovnkube-control-plane-749d76644c-755xr\" (UID: \"39626c5c-59f0-466e-81f3-b434bae72182\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.379322 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.379401 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.379419 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.379440 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.379493 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:40Z","lastTransitionTime":"2026-01-29T15:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.385508 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.482574 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.482619 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.482633 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.482652 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.482664 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:40Z","lastTransitionTime":"2026-01-29T15:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.585792 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.585836 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.585845 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.585865 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.585875 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:40Z","lastTransitionTime":"2026-01-29T15:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.689908 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.689999 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.690015 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.690065 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.690093 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:40Z","lastTransitionTime":"2026-01-29T15:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.793260 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.793309 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.793319 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.793339 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.793351 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:40Z","lastTransitionTime":"2026-01-29T15:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.872996 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.873280 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:01:40 crc kubenswrapper[4620]: E0129 15:01:40.873358 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:01:40 crc kubenswrapper[4620]: E0129 15:01:40.873537 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.885561 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.891605 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.891862 4620 scope.go:117] "RemoveContainer" containerID="aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.898099 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.898132 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.898144 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.898163 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.898180 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:40Z","lastTransitionTime":"2026-01-29T15:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.909324 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.923339 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.935482 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.950041 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.962377 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.975305 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:40 crc kubenswrapper[4620]: I0129 15:01:40.988155 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.001595 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.001667 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.001678 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.001695 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.001708 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:41Z","lastTransitionTime":"2026-01-29T15:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.002480 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.059018 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 22:36:16.940236333 +0000 UTC
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.087776 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" event={"ID":"39626c5c-59f0-466e-81f3-b434bae72182","Type":"ContainerStarted","Data":"28fc32e29a9cf3d581bcee8c09647a8cf5f6009a80493a08f423633f019caef0"}
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.105162 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.105482 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.105559 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.105642 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.105715 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:41Z","lastTransitionTime":"2026-01-29T15:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:01:41 crc kubenswrapper[4620]: E0129 15:01:41.174214 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:01:49.174184492 +0000 UTC m=+49.787012137 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.174078 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.174932 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:01:41 crc kubenswrapper[4620]: E0129 15:01:41.175125 4620 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 15:01:41 crc kubenswrapper[4620]: E0129 15:01:41.175339 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:49.175325058 +0000 UTC m=+49.788152703 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.179327 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.203667 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.210076 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.210123 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.210133 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.210151 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.210161 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:41Z","lastTransitionTime":"2026-01-29T15:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.214380 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.228925 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.240097 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.313915 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.314280 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.314394 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.314481 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.314614 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:41Z","lastTransitionTime":"2026-01-29T15:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.418307 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.418685 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.418910 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.419093 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.419290 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:41Z","lastTransitionTime":"2026-01-29T15:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.495587 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-twqvf"]
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.496304 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf"
Jan 29 15:01:41 crc kubenswrapper[4620]: E0129 15:01:41.496451 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.516656 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.522222 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.522303 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.522324 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.522382 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.522404 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:41Z","lastTransitionTime":"2026-01-29T15:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.527845 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.536602 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.544409 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.559468 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.570174 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.581244 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.582557 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs\") pod \"network-metrics-daemon-twqvf\" (UID: \"82634d3f-d985-4384-bd37-426d509d4e57\") " pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.582686 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw4k7\" (UniqueName: \"kubernetes.io/projected/82634d3f-d985-4384-bd37-426d509d4e57-kube-api-access-zw4k7\") pod \"network-metrics-daemon-twqvf\" (UID: \"82634d3f-d985-4384-bd37-426d509d4e57\") " pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.597511 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.607481 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.626859 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.626919 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.626932 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.626953 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.626965 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:41Z","lastTransitionTime":"2026-01-29T15:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.635463 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.645648 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.654317 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.660786 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.667960 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.681502 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.684175 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.684268 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs\") pod \"network-metrics-daemon-twqvf\" (UID: \"82634d3f-d985-4384-bd37-426d509d4e57\") " pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.684318 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.684349 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw4k7\" (UniqueName: \"kubernetes.io/projected/82634d3f-d985-4384-bd37-426d509d4e57-kube-api-access-zw4k7\") pod \"network-metrics-daemon-twqvf\" (UID: \"82634d3f-d985-4384-bd37-426d509d4e57\") " pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.684372 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:41 crc kubenswrapper[4620]: E0129 15:01:41.684408 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:01:41 crc kubenswrapper[4620]: E0129 15:01:41.684435 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:01:41 crc kubenswrapper[4620]: E0129 15:01:41.684440 4620 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:01:41 crc kubenswrapper[4620]: E0129 15:01:41.684489 4620 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:01:41 crc kubenswrapper[4620]: E0129 15:01:41.684551 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs podName:82634d3f-d985-4384-bd37-426d509d4e57 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:42.184529439 +0000 UTC m=+42.797357084 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs") pod "network-metrics-daemon-twqvf" (UID: "82634d3f-d985-4384-bd37-426d509d4e57") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:01:41 crc kubenswrapper[4620]: E0129 15:01:41.684551 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:01:41 crc kubenswrapper[4620]: E0129 15:01:41.684575 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:01:41 crc kubenswrapper[4620]: E0129 15:01:41.684585 4620 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:41 crc kubenswrapper[4620]: E0129 15:01:41.684571 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:49.68456179 +0000 UTC m=+50.297389435 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:01:41 crc kubenswrapper[4620]: E0129 15:01:41.684668 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:49.684644642 +0000 UTC m=+50.297472457 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:41 crc kubenswrapper[4620]: E0129 15:01:41.684476 4620 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:41 crc kubenswrapper[4620]: E0129 15:01:41.684704 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:49.684695644 +0000 UTC m=+50.297523499 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.694980 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.711241 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw4k7\" (UniqueName: \"kubernetes.io/projected/82634d3f-d985-4384-bd37-426d509d4e57-kube-api-access-zw4k7\") pod \"network-metrics-daemon-twqvf\" (UID: \"82634d3f-d985-4384-bd37-426d509d4e57\") " pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.729716 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.729807 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.729826 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.729853 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.729867 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:41Z","lastTransitionTime":"2026-01-29T15:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.832669 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.832723 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.832738 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.832775 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.832793 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:41Z","lastTransitionTime":"2026-01-29T15:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.871644 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:01:41 crc kubenswrapper[4620]: E0129 15:01:41.871830 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.935379 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.935419 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.935428 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.935443 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:41 crc kubenswrapper[4620]: I0129 15:01:41.935453 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:41Z","lastTransitionTime":"2026-01-29T15:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.038447 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.038513 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.038526 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.038546 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.038574 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:42Z","lastTransitionTime":"2026-01-29T15:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.059945 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 11:12:41.993361409 +0000 UTC Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.093185 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" event={"ID":"f995d57b-b546-4226-83f5-3e2c1becec57","Type":"ContainerStarted","Data":"dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689"} Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.095383 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerStarted","Data":"2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529"} Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.096736 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerStarted","Data":"0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9"} Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.098682 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kqpq8" event={"ID":"3805ba80-f304-49f1-8e23-e25e0a1ff177","Type":"ContainerStarted","Data":"136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2"} Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.100958 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tlwgt" event={"ID":"f66b658d-e5ec-445e-9494-0a0062e87c4c","Type":"ContainerStarted","Data":"45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b"} Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.103152 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1"} Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.105042 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948"} Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.106566 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de"} Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.143205 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.143367 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.143436 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.143462 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.143476 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:42Z","lastTransitionTime":"2026-01-29T15:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.197830 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs\") pod \"network-metrics-daemon-twqvf\" (UID: \"82634d3f-d985-4384-bd37-426d509d4e57\") " pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:01:42 crc kubenswrapper[4620]: E0129 15:01:42.198010 4620 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:01:42 crc kubenswrapper[4620]: E0129 15:01:42.198080 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs podName:82634d3f-d985-4384-bd37-426d509d4e57 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:43.198059956 +0000 UTC m=+43.810887611 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs") pod "network-metrics-daemon-twqvf" (UID: "82634d3f-d985-4384-bd37-426d509d4e57") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.247440 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.247493 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.247507 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.247530 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.247543 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:42Z","lastTransitionTime":"2026-01-29T15:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.354014 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.354076 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.354094 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.354121 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.354143 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:42Z","lastTransitionTime":"2026-01-29T15:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.458107 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.458194 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.458234 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.458260 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.458327 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:42Z","lastTransitionTime":"2026-01-29T15:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.560624 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.560702 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.560714 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.560732 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.560743 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:42Z","lastTransitionTime":"2026-01-29T15:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.664130 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.664188 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.664205 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.664231 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.664244 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:42Z","lastTransitionTime":"2026-01-29T15:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.767435 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.767476 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.767488 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.767504 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.767516 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:42Z","lastTransitionTime":"2026-01-29T15:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.870233 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.870290 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.870301 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.870332 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.870345 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:42Z","lastTransitionTime":"2026-01-29T15:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.871855 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.871947 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:01:42 crc kubenswrapper[4620]: E0129 15:01:42.872062 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:01:42 crc kubenswrapper[4620]: E0129 15:01:42.872168 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.972210 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.972250 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.972259 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.972273 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:42 crc kubenswrapper[4620]: I0129 15:01:42.972282 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:42Z","lastTransitionTime":"2026-01-29T15:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.060114 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 15:59:00.558518988 +0000 UTC Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.074973 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.075011 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.075021 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.075035 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.075045 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:43Z","lastTransitionTime":"2026-01-29T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.111731 4620 generic.go:334] "Generic (PLEG): container finished" podID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerID="0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9" exitCode=0 Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.111848 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerDied","Data":"0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9"} Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.114337 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ckzvr" event={"ID":"93c9842c-403b-4367-a18f-32a8fa8e58de","Type":"ContainerStarted","Data":"f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587"} Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.116713 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.119255 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0"} Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.125599 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" event={"ID":"39626c5c-59f0-466e-81f3-b434bae72182","Type":"ContainerStarted","Data":"1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7"} Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.138154 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.152161 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.163548 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.173447 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.179624 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.179657 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.179700 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.179719 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.179731 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:43Z","lastTransitionTime":"2026-01-29T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.181610 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.193028 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.206648 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.209141 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs\") pod \"network-metrics-daemon-twqvf\" (UID: \"82634d3f-d985-4384-bd37-426d509d4e57\") " pod="openshift-multus/network-metrics-daemon-twqvf"
Jan 29 15:01:43 crc kubenswrapper[4620]: E0129 15:01:43.209468 4620 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 15:01:43 crc kubenswrapper[4620]: E0129 15:01:43.210986 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs podName:82634d3f-d985-4384-bd37-426d509d4e57 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:45.210963175 +0000 UTC m=+45.823790820 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs") pod "network-metrics-daemon-twqvf" (UID: "82634d3f-d985-4384-bd37-426d509d4e57") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.220283 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.232331 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.244509 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.258369 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.270369 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.283651 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.284282 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.284452 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.284606 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.284732 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:43Z","lastTransitionTime":"2026-01-29T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.284914 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.300397 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.314674 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.333587 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.349880 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.363523 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.373767 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.391573 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.391608 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.391617 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.391631 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.391646 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:43Z","lastTransitionTime":"2026-01-29T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.397728 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.410993 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.429587 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":
\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.441253 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.450274 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.457331 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.468077 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cni
bin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.478580 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.489061 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.494444 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.494532 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.494544 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.494587 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.494601 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:43Z","lastTransitionTime":"2026-01-29T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.502475 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.528427 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.542999 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.564497 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.597960 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.598037 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.598048 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.598068 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.598096 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:43Z","lastTransitionTime":"2026-01-29T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.700935 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.700977 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.700987 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.701002 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.701013 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:43Z","lastTransitionTime":"2026-01-29T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.803702 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.803777 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.803796 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.803817 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.803829 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:43Z","lastTransitionTime":"2026-01-29T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.871788 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:01:43 crc kubenswrapper[4620]: E0129 15:01:43.872248 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.871995 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:01:43 crc kubenswrapper[4620]: E0129 15:01:43.872642 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.906465 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.906500 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.906513 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.906529 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:43 crc kubenswrapper[4620]: I0129 15:01:43.906541 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:43Z","lastTransitionTime":"2026-01-29T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.023778 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.023831 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.023843 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.023864 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.023876 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:44Z","lastTransitionTime":"2026-01-29T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.060507 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 10:23:59.585218137 +0000 UTC Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.126395 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.126438 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.126449 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.126465 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.126475 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:44Z","lastTransitionTime":"2026-01-29T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.135484 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a"} Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.138508 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" event={"ID":"39626c5c-59f0-466e-81f3-b434bae72182","Type":"ContainerStarted","Data":"89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2"} Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.139901 4620 generic.go:334] "Generic (PLEG): container finished" podID="f995d57b-b546-4226-83f5-3e2c1becec57" containerID="dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689" exitCode=0 Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.139987 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" event={"ID":"f995d57b-b546-4226-83f5-3e2c1becec57","Type":"ContainerDied","Data":"dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689"} Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.144208 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerStarted","Data":"b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f"} Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.145927 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerStarted","Data":"0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96"} Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.146180 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.148632 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.157235 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.166721 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.186575 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.200780 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.215276 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.227616 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.229228 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.229261 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.229272 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.229286 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.229296 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:44Z","lastTransitionTime":"2026-01-29T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.238215 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.249336 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.260158 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.270115 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.285024 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.294657 
4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.302106 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.313251 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cni
bin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.328395 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.332047 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.332095 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.332106 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.332125 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.332137 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:44Z","lastTransitionTime":"2026-01-29T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.339908 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.350245 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.357443 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.365709 4620 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.374879 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.399527 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":
\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.411035 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.429367 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.436676 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.437735 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.438069 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.438271 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.438433 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:44Z","lastTransitionTime":"2026-01-29T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.446669 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.461747 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release
\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.472435 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.472486 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.472500 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.472522 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.472535 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:44Z","lastTransitionTime":"2026-01-29T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.474880 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: E0129 15:01:44.486045 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9
d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.486303 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.491412 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.491462 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.491485 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.491514 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.491530 4620 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:44Z","lastTransitionTime":"2026-01-29T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.498650 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: E0129 15:01:44.503067 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.508728 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.508932 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.509018 4620 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.509109 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.509139 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.509307 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:44Z","lastTransitionTime":"2026-01-29T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:44 crc kubenswrapper[4620]: E0129 15:01:44.522071 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.522385 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.529027 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.529062 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.529072 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.529087 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.529098 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:44Z","lastTransitionTime":"2026-01-29T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.537096 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\
",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: E0129 15:01:44.539082 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.543588 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.543626 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.543642 4620 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.543665 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.543682 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:44Z","lastTransitionTime":"2026-01-29T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:44 crc kubenswrapper[4620]: E0129 15:01:44.555830 4620 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.557583 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.557616 4620
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.557626 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.557668 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.557682 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:44Z","lastTransitionTime":"2026-01-29T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.660945 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.661002 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.661014 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.661035 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.661053 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:44Z","lastTransitionTime":"2026-01-29T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.763936 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.763984 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.763994 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.764008 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.764017 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:44Z","lastTransitionTime":"2026-01-29T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.866412 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.866445 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.866455 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.866472 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.866481 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:44Z","lastTransitionTime":"2026-01-29T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.874074 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.874170 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:44 crc kubenswrapper[4620]: E0129 15:01:44.874271 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:01:44 crc kubenswrapper[4620]: E0129 15:01:44.874372 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.969332 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.969363 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.969372 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.969389 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:44 crc kubenswrapper[4620]: I0129 15:01:44.969401 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:44Z","lastTransitionTime":"2026-01-29T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.061350 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 13:08:54.065534708 +0000 UTC Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.072333 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.072372 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.072380 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.072394 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.072404 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:45Z","lastTransitionTime":"2026-01-29T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.153680 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" event={"ID":"f995d57b-b546-4226-83f5-3e2c1becec57","Type":"ContainerStarted","Data":"bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176"} Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.157643 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerStarted","Data":"4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac"} Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.157804 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerStarted","Data":"bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98"} Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.177289 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.177350 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.177369 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.177400 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.177429 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:45Z","lastTransitionTime":"2026-01-29T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.181640 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.212472 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host
-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168
.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.230001 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-contr
oller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.244375 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs\") pod \"network-metrics-daemon-twqvf\" (UID: \"82634d3f-d985-4384-bd37-426d509d4e57\") " pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:01:45 crc kubenswrapper[4620]: E0129 15:01:45.244743 4620 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:01:45 crc kubenswrapper[4620]: E0129 15:01:45.245074 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs podName:82634d3f-d985-4384-bd37-426d509d4e57 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:49.245051493 +0000 UTC m=+49.857879148 (durationBeforeRetry 4s). 
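The 4s durationBeforeRetry above is the volume manager's per-operation exponential backoff at work: each failed MountVolume attempt lengthens the wait before the next retry, which is why the operation is parked until 15:01:49 rather than retried immediately. A minimal stdlib-Go sketch of that retry pattern follows; the function name retryWithBackoff and the initial/cap constants are illustrative assumptions, not kubelet's actual values from nestedpendingoperations.go.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op, doubling the delay after each failure
// up to maxDelay. The constants are assumptions for illustration;
// kubelet's real backoff lives in nestedpendingoperations.
func retryWithBackoff(op func() error, initial, maxDelay time.Duration, steps int) error {
	delay := initial
	for i := 0; i < steps; i++ {
		if err := op(); err == nil {
			return nil
		}
		// Mirrors the log line: no retries permitted until now+delay.
		fmt.Printf("failed. No retries permitted until %s (durationBeforeRetry %s)\n",
			time.Now().Add(delay).Format(time.RFC3339), delay)
		time.Sleep(delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
	return errors.New("retry budget exhausted")
}

func main() {
	attempts := 0
	_ = retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			// Stand-in for the failure in this record: the secret that
			// backs the volume is not yet registered with the kubelet.
			return errors.New(`object "openshift-multus"/"metrics-daemon-secret" not registered`)
		}
		return nil
	}, 500*time.Millisecond, 2*time.Minute, 10)
}

The record continues below with the mount error that triggered this backoff.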
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs") pod "network-metrics-daemon-twqvf" (UID: "82634d3f-d985-4384-bd37-426d509d4e57") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.252434 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15
:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.270604 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.281168 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.281222 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.281243 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.281271 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.281294 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:45Z","lastTransitionTime":"2026-01-29T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.291609 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.304242 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.319583 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.333520 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.351287 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.365645 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.380528 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.383408 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.383455 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.383467 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.383487 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.383497 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:45Z","lastTransitionTime":"2026-01-29T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.397103 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.412597 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.428202 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 29 
15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.442848 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.487287 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.487354 4620 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.487367 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.487387 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.487404 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:45Z","lastTransitionTime":"2026-01-29T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.590638 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.590688 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.590704 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.590728 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.590743 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:45Z","lastTransitionTime":"2026-01-29T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.694559 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.694670 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.694689 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.694720 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.694736 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:45Z","lastTransitionTime":"2026-01-29T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.804946 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.805004 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.805017 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.805035 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.805049 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:45Z","lastTransitionTime":"2026-01-29T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.871920 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:01:45 crc kubenswrapper[4620]: E0129 15:01:45.872051 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.872186 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:01:45 crc kubenswrapper[4620]: E0129 15:01:45.872375 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.907309 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.907339 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.907348 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.907361 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:45 crc kubenswrapper[4620]: I0129 15:01:45.907369 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:45Z","lastTransitionTime":"2026-01-29T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.010566 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.010639 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.010657 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.010685 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.010704 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:46Z","lastTransitionTime":"2026-01-29T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.062243 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 19:58:15.132459288 +0000 UTC Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.113697 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.113749 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.113778 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.113799 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.113812 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:46Z","lastTransitionTime":"2026-01-29T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.165802 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerStarted","Data":"72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c"} Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.217414 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.217494 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.217525 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.217566 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.217592 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:46Z","lastTransitionTime":"2026-01-29T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.320688 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.320823 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.320847 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.320875 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.320899 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:46Z","lastTransitionTime":"2026-01-29T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.430182 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.430239 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.430253 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.430278 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.430293 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:46Z","lastTransitionTime":"2026-01-29T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.533119 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.533182 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.533195 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.533221 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.533235 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:46Z","lastTransitionTime":"2026-01-29T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.636038 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.636091 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.636104 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.636123 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.636135 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:46Z","lastTransitionTime":"2026-01-29T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.739667 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.739718 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.739728 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.739803 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.739818 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:46Z","lastTransitionTime":"2026-01-29T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.842926 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.843484 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.843499 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.843524 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.843537 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:46Z","lastTransitionTime":"2026-01-29T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.871917 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:01:46 crc kubenswrapper[4620]: E0129 15:01:46.872097 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.874051 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:46 crc kubenswrapper[4620]: E0129 15:01:46.874309 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.945917 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.946009 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.946027 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.946042 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:46 crc kubenswrapper[4620]: I0129 15:01:46.946051 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:46Z","lastTransitionTime":"2026-01-29T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.049306 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.049346 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.049356 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.049373 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.049385 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:47Z","lastTransitionTime":"2026-01-29T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.062407 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 22:08:20.60657376 +0000 UTC Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.152764 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.152798 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.152809 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.152826 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.152836 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:47Z","lastTransitionTime":"2026-01-29T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.175480 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerStarted","Data":"73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b"} Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.256587 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.256638 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.256672 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.256694 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.256711 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:47Z","lastTransitionTime":"2026-01-29T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.359658 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.359704 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.359718 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.359737 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.359787 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:47Z","lastTransitionTime":"2026-01-29T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.462572 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.462611 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.462621 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.462633 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.462641 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:47Z","lastTransitionTime":"2026-01-29T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.565687 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.565724 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.565735 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.565748 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.565774 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:47Z","lastTransitionTime":"2026-01-29T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.668615 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.668681 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.668695 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.668716 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.668730 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:47Z","lastTransitionTime":"2026-01-29T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.772048 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.772092 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.772104 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.772129 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.772143 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:47Z","lastTransitionTime":"2026-01-29T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.872375 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf"
Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.872694 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:01:47 crc kubenswrapper[4620]: E0129 15:01:47.872837 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57"
Jan 29 15:01:47 crc kubenswrapper[4620]: E0129 15:01:47.873035 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.875196 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.875237 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.875247 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.875264 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.875272 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:47Z","lastTransitionTime":"2026-01-29T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.977842 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.977886 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.977899 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.977918 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:01:47 crc kubenswrapper[4620]: I0129 15:01:47.977932 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:47Z","lastTransitionTime":"2026-01-29T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.063456 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 22:57:57.974443058 +0000 UTC Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.081411 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.081463 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.081476 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.081497 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.081512 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:48Z","lastTransitionTime":"2026-01-29T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.184853 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.184913 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.184926 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.184951 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.184968 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:48Z","lastTransitionTime":"2026-01-29T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.289283 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.289334 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.289350 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.289373 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.289387 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:48Z","lastTransitionTime":"2026-01-29T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.392534 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.392590 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.392599 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.392623 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.392635 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:48Z","lastTransitionTime":"2026-01-29T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.495403 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.495444 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.495457 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.495476 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.495489 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:48Z","lastTransitionTime":"2026-01-29T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.598453 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.598505 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.598519 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.598536 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.598550 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:48Z","lastTransitionTime":"2026-01-29T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.702106 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.702167 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.702181 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.702205 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.702219 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:48Z","lastTransitionTime":"2026-01-29T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.807270 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.807307 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.807316 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.807333 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.807345 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:48Z","lastTransitionTime":"2026-01-29T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.873022 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:01:48 crc kubenswrapper[4620]: E0129 15:01:48.873195 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.873391 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:48 crc kubenswrapper[4620]: E0129 15:01:48.873616 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.911178 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.911258 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.911271 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.911292 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:48 crc kubenswrapper[4620]: I0129 15:01:48.911304 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:48Z","lastTransitionTime":"2026-01-29T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.015059 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.015131 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.015142 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.015162 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.015174 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:49Z","lastTransitionTime":"2026-01-29T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.064671 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 07:30:28.16255409 +0000 UTC Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.119570 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.119657 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.119672 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.119698 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.119715 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:49Z","lastTransitionTime":"2026-01-29T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.192734 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.192850 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:49 crc kubenswrapper[4620]: E0129 15:01:49.193001 4620 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:01:49 crc kubenswrapper[4620]: E0129 15:01:49.193015 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:02:05.192978396 +0000 UTC m=+65.805806041 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:01:49 crc kubenswrapper[4620]: E0129 15:01:49.193062 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:02:05.193050738 +0000 UTC m=+65.805878373 (durationBeforeRetry 16s). 
Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.193471 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerStarted","Data":"9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4"}
Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.195326 4620 generic.go:334] "Generic (PLEG): container finished" podID="f995d57b-b546-4226-83f5-3e2c1becec57" containerID="bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176" exitCode=0
Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.195360 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" event={"ID":"f995d57b-b546-4226-83f5-3e2c1becec57","Type":"ContainerDied","Data":"bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176"}
Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.211276 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:49Z is after 2025-08-24T17:21:41Z"
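Every "Failed to update status for pod" entry in this stretch of the log shares one root cause: the network-node-identity webhook's serving certificate expired on 2025-08-24, so each HTTPS POST to https://127.0.0.1:9743/pod fails TLS verification. The validity-window test that produces "certificate has expired or is not yet valid" is the standard x509 one; a minimal Go sketch follows (the PEM file path is a placeholder, and the manual NotBefore/NotAfter comparison only mirrors what the TLS handshake does internally).

// certexpiry.go - sketch of the validity-window check behind the
// repeated "x509: certificate has expired or is not yet valid" errors.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("webhook-cert.pem") // placeholder path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Same test TLS verification applies: "now" must fall inside
	// [NotBefore, NotAfter]. In the log, 2026-01-29T15:01:49Z is
	// after the webhook cert's NotAfter of 2025-08-24T17:21:41Z.
	now := time.Now()
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate has expired or is not yet valid: %s outside [%s, %s]\n",
			now.Format(time.RFC3339),
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339))
	}
}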
Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.223229 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.223294 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.223307 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.223331 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.223347 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:49Z","lastTransitionTime":"2026-01-29T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.227501 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.244544 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.260576 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.274971 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.293520 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs\") pod \"network-metrics-daemon-twqvf\" (UID: \"82634d3f-d985-4384-bd37-426d509d4e57\") " pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:01:49 crc kubenswrapper[4620]: E0129 15:01:49.294838 4620 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:01:49 crc kubenswrapper[4620]: E0129 15:01:49.294934 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs podName:82634d3f-d985-4384-bd37-426d509d4e57 nodeName:}" failed. No retries permitted until 2026-01-29 15:01:57.294910551 +0000 UTC m=+57.907738196 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs") pod "network-metrics-daemon-twqvf" (UID: "82634d3f-d985-4384-bd37-426d509d4e57") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.295068 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPat
h\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.308832 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.326489 4620 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.326546 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.326562 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.326589 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.326604 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:49Z","lastTransitionTime":"2026-01-29T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.327694 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.345634 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift
-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.359985 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.374841 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.388872 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.406291 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.426541 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.432005 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.432028 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.432037 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.432050 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.432060 4620 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:49Z","lastTransitionTime":"2026-01-29T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.443571 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.457426 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.535561 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.535601 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.535612 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.535632 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.535648 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:49Z","lastTransitionTime":"2026-01-29T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.637926 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.637974 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.637986 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.638006 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.638021 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:49Z","lastTransitionTime":"2026-01-29T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.697076 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.697139 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.697175 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:01:49 crc kubenswrapper[4620]: E0129 15:01:49.697356 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:01:49 crc kubenswrapper[4620]: E0129 15:01:49.697382 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:01:49 crc kubenswrapper[4620]: E0129 15:01:49.697381 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:01:49 crc kubenswrapper[4620]: E0129 15:01:49.697455 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:01:49 crc kubenswrapper[4620]: E0129 15:01:49.697484 4620 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:49 crc kubenswrapper[4620]: E0129 15:01:49.697587 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:02:05.697554581 +0000 UTC m=+66.310382226 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:49 crc kubenswrapper[4620]: E0129 15:01:49.697605 4620 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:01:49 crc kubenswrapper[4620]: E0129 15:01:49.697723 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:02:05.697690856 +0000 UTC m=+66.310518701 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:01:49 crc kubenswrapper[4620]: E0129 15:01:49.697399 4620 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:49 crc kubenswrapper[4620]: E0129 15:01:49.697810 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:02:05.697801519 +0000 UTC m=+66.310629404 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.743267 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.743321 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.743365 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.743386 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.743399 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:49Z","lastTransitionTime":"2026-01-29T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.858889 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.859314 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.859328 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.859344 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.859357 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:49Z","lastTransitionTime":"2026-01-29T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.872233 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:01:49 crc kubenswrapper[4620]: E0129 15:01:49.872362 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.872498 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:01:49 crc kubenswrapper[4620]: E0129 15:01:49.872690 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.962732 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.962812 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.962827 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.962849 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:49 crc kubenswrapper[4620]: I0129 15:01:49.962860 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:49Z","lastTransitionTime":"2026-01-29T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.065153 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 01:02:00.998730054 +0000 UTC Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.065911 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.065956 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.065966 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.065982 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.065992 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:50Z","lastTransitionTime":"2026-01-29T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.168865 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.168905 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.168915 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.168933 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.168949 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:50Z","lastTransitionTime":"2026-01-29T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.271720 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.271828 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.271843 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.271865 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.271880 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:50Z","lastTransitionTime":"2026-01-29T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.376038 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.376080 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.376091 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.376114 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.376126 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:50Z","lastTransitionTime":"2026-01-29T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.478978 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.479029 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.479041 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.479060 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.479073 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:50Z","lastTransitionTime":"2026-01-29T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.580825 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.580861 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.580871 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.580889 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.580901 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:50Z","lastTransitionTime":"2026-01-29T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.683096 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.683163 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.683184 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.683222 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.683252 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:50Z","lastTransitionTime":"2026-01-29T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.785998 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.786062 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.786077 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.786297 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.786315 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:50Z","lastTransitionTime":"2026-01-29T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.871983 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.872010 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:50 crc kubenswrapper[4620]: E0129 15:01:50.872142 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:01:50 crc kubenswrapper[4620]: E0129 15:01:50.872371 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.889428 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.889483 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.889499 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.889519 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.889535 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:50Z","lastTransitionTime":"2026-01-29T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.895040 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.912604 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.929129 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.949664 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.975352 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.992782 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.993036 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.993115 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.993197 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.993272 4620 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:50Z","lastTransitionTime":"2026-01-29T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:50 crc kubenswrapper[4620]: I0129 15:01:50.995800 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.013372 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.027521 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.044600 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 
15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.058705 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.066595 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 06:23:12.17334165 +0000 UTC Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 
15:01:51.072242 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.095908 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.095943 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.095954 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.095972 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.095983 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:51Z","lastTransitionTime":"2026-01-29T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.102540 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.116312 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.129057 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5
e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.148212 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.168156 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift
-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.199648 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.199747 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.199800 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.199822 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.199834 4620 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:51Z","lastTransitionTime":"2026-01-29T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.206391 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" event={"ID":"f995d57b-b546-4226-83f5-3e2c1becec57","Type":"ContainerDied","Data":"4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307"} Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.206329 4620 generic.go:334] "Generic (PLEG): container finished" podID="f995d57b-b546-4226-83f5-3e2c1becec57" containerID="4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307" exitCode=0 Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.226433 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.255291 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z 
is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.274874 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.289136 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.303017 4620 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.303053 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.303061 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.303075 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.303085 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:51Z","lastTransitionTime":"2026-01-29T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.307928 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.330172 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.346286 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.361969 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.380286 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.395432 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.406874 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.406923 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.406942 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.406970 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.406985 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:51Z","lastTransitionTime":"2026-01-29T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.412261 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.427673 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.443386 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.458157 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.473447 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.486249 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:51Z is after 2025-08-24T17:21:41Z"
Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.510352 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.510395 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.510405 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.510423 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.510435 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:51Z","lastTransitionTime":"2026-01-29T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.613402 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.613448 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.613459 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.613478 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
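
Every "Failed to update status for pod" entry above has the same root cause: the pod.network-node-identity.openshift.io admission webhook at 127.0.0.1:9743 is serving a certificate that expired on 2025-08-24, so each status patch (a strategic-merge patch; its $setElementOrder/conditions key just preserves the ordering of the conditions list) is rejected during the TLS handshake, before the patch itself is ever evaluated. A minimal Go sketch of the validity-window check that produces this exact error wording, assuming a PEM-encoded certificate on disk (the path below is hypothetical):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path; point this at any PEM-encoded serving certificate.
	pemBytes, err := os.ReadFile("/tmp/webhook-serving-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now().UTC()
	switch {
	case now.After(cert.NotAfter):
		// Mirrors the log wording: "current time ... is after ..."
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("certificate is valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}

Until that webhook certificate is rotated, every status patch from this kubelet keeps failing with the same message; only the "current time" in the error advances.
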
Has your network provider started?"} Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.716940 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.717013 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.717026 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.717047 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.717067 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:51Z","lastTransitionTime":"2026-01-29T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.820119 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.820165 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.820175 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.820202 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.820217 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:51Z","lastTransitionTime":"2026-01-29T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.872221 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:01:51 crc kubenswrapper[4620]: E0129 15:01:51.872356 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.872700 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:01:51 crc kubenswrapper[4620]: E0129 15:01:51.872783 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.924931 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.925646 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.925656 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.925674 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:51 crc kubenswrapper[4620]: I0129 15:01:51.925685 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:51Z","lastTransitionTime":"2026-01-29T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.028383 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.028453 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.028467 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.028490 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.028543 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:52Z","lastTransitionTime":"2026-01-29T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.067289 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 11:47:53.224197842 +0000 UTC Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.131555 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.131637 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.131653 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.131679 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.131697 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:52Z","lastTransitionTime":"2026-01-29T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.215198 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" event={"ID":"f995d57b-b546-4226-83f5-3e2c1becec57","Type":"ContainerStarted","Data":"c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e"} Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.234642 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.234687 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.234699 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.234720 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.234733 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:52Z","lastTransitionTime":"2026-01-29T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.339618 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.339691 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.339705 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.339727 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.339747 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:52Z","lastTransitionTime":"2026-01-29T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.442900 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.442936 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.442945 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.442960 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.442969 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:52Z","lastTransitionTime":"2026-01-29T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.545225 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.545525 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.545592 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.545653 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.545723 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:52Z","lastTransitionTime":"2026-01-29T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.647989 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.648049 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.648058 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.648079 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.648091 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:52Z","lastTransitionTime":"2026-01-29T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.750746 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.750825 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.750839 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.750861 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.750875 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:52Z","lastTransitionTime":"2026-01-29T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.853910 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.853960 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.853972 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.853992 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.854296 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:52Z","lastTransitionTime":"2026-01-29T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.871487 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.871574 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:52 crc kubenswrapper[4620]: E0129 15:01:52.871696 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:01:52 crc kubenswrapper[4620]: E0129 15:01:52.871888 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.958017 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.958073 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.958089 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.958112 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:52 crc kubenswrapper[4620]: I0129 15:01:52.958134 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:52Z","lastTransitionTime":"2026-01-29T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.061942 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.062161 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.062248 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.062432 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.062632 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:53Z","lastTransitionTime":"2026-01-29T15:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.068187 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 00:48:11.24078685 +0000 UTC Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.165489 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.165534 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.165547 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.165565 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.165583 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:53Z","lastTransitionTime":"2026-01-29T15:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.223268 4620 generic.go:334] "Generic (PLEG): container finished" podID="f995d57b-b546-4226-83f5-3e2c1becec57" containerID="c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e" exitCode=0 Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.223394 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" event={"ID":"f995d57b-b546-4226-83f5-3e2c1becec57","Type":"ContainerDied","Data":"c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e"} Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.235720 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerStarted","Data":"cfc99f91f58882c1c64b78b32316b7628eba4705edacb6020cc2a11810887c02"} Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.235828 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerStarted","Data":"cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6"} Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.236271 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.236343 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.236366 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.246244 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.268043 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.268171 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 
15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.268386 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.268214 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.268418 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.268427 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.268443 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.268454 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:53Z","lastTransitionTime":"2026-01-29T15:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.281833 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.294751 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.309772 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.332960 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z 
is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.349986 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.369525 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff60
0638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.374183 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.374217 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.374229 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.374249 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:53 crc 
kubenswrapper[4620]: I0129 15:01:53.374262 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:53Z","lastTransitionTime":"2026-01-29T15:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.386713 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be
1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.405388 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.420517 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.435583 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.453766 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.470117 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.476993 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.477048 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.477072 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.477101 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.477120 4620 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:53Z","lastTransitionTime":"2026-01-29T15:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.487351 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.501781 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.518840 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.538515 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc99f91f58882c1c64b78b32316b7628eba4705
edacb6020cc2a11810887c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.559654 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.581098 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.583886 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.583927 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.583941 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.583963 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.583976 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:53Z","lastTransitionTime":"2026-01-29T15:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.596873 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.620263 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.636133 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.652238 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.667962 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.681452 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.687332 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:53 crc 
kubenswrapper[4620]: I0129 15:01:53.687386 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.687403 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.687426 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.687443 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:53Z","lastTransitionTime":"2026-01-29T15:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.697090 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.715604 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.731844 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.747178 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.751841 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.760742 4620 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.761102 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.781187 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc 
kubenswrapper[4620]: I0129 15:01:53.791470 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.791521 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.791533 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.791557 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.791572 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:53Z","lastTransitionTime":"2026-01-29T15:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.798986 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.820811 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc99f91f58882c1c64b78b32316b7628eba4705
edacb6020cc2a11810887c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.837295 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.851740 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.864195 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5
e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.872114 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.872161 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:01:53 crc kubenswrapper[4620]: E0129 15:01:53.872331 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:01:53 crc kubenswrapper[4620]: E0129 15:01:53.872489 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.883800 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635
f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.895122 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.895194 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.895209 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.895232 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.895249 4620 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:53Z","lastTransitionTime":"2026-01-29T15:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.900101 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.916797 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.931105 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.943898 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.954132 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.969303 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.983727 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.997647 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.997709 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.997729 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.997782 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.997796 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:53Z","lastTransitionTime":"2026-01-29T15:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:53 crc kubenswrapper[4620]: I0129 15:01:53.999656 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.012567 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.023861 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 
15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.034805 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.068783 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 23:06:13.491235507 +0000 UTC Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 
15:01:54.100573 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.100614 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.100627 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.100646 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.100658 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:54Z","lastTransitionTime":"2026-01-29T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.203391 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.203920 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.203932 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.203953 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.203965 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:54Z","lastTransitionTime":"2026-01-29T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.243267 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" event={"ID":"f995d57b-b546-4226-83f5-3e2c1becec57","Type":"ContainerStarted","Data":"e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca"} Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.263481 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.281189 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.297581 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.311069 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.311116 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.311126 4620 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.311144 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.311154 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:54Z","lastTransitionTime":"2026-01-29T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.312222 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.324985 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.339040 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.362510 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.379961 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.394528 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.409449 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 
15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.414307 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.414338 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.414348 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.414366 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.414376 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:54Z","lastTransitionTime":"2026-01-29T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.425509 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.440546 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.475366 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc99f91f58882c1c64b78b32316b7628eba4705
edacb6020cc2a11810887c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.491710 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.505226 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.516877 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.516931 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.516945 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.516965 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.516987 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:54Z","lastTransitionTime":"2026-01-29T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.519414 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.535036 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.619557 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.619627 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.619637 4620 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.619656 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.619670 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:54Z","lastTransitionTime":"2026-01-29T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.722409 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.722466 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.722481 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.722502 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.722516 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:54Z","lastTransitionTime":"2026-01-29T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.755640 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.755700 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.755716 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.755739 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.755774 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:54Z","lastTransitionTime":"2026-01-29T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:54 crc kubenswrapper[4620]: E0129 15:01:54.775835 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.780443 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.780490 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.780500 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.780515 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.780535 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:54Z","lastTransitionTime":"2026-01-29T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:54 crc kubenswrapper[4620]: E0129 15:01:54.804391 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.814559 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.814607 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.814620 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.814641 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.814653 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:54Z","lastTransitionTime":"2026-01-29T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:54 crc kubenswrapper[4620]: E0129 15:01:54.828425 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.833825 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.833856 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.833867 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.833886 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.833897 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:54Z","lastTransitionTime":"2026-01-29T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:54 crc kubenswrapper[4620]: E0129 15:01:54.851296 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.855360 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.855405 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.855416 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.855433 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.855481 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:54Z","lastTransitionTime":"2026-01-29T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:54 crc kubenswrapper[4620]: E0129 15:01:54.869154 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:54 crc kubenswrapper[4620]: E0129 15:01:54.869338 4620 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.871242 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.871289 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.871301 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.871323 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.871338 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:54Z","lastTransitionTime":"2026-01-29T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.871864 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.871922 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:01:54 crc kubenswrapper[4620]: E0129 15:01:54.872049 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:01:54 crc kubenswrapper[4620]: E0129 15:01:54.872107 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.974591 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.974656 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.974669 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.974691 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:54 crc kubenswrapper[4620]: I0129 15:01:54.974712 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:54Z","lastTransitionTime":"2026-01-29T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.069195 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 02:15:42.433140938 +0000 UTC Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.077373 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.077455 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.077471 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.077498 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.077517 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:55Z","lastTransitionTime":"2026-01-29T15:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.179919 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.179959 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.179968 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.179983 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.179994 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:55Z","lastTransitionTime":"2026-01-29T15:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.282585 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.282657 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.282678 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.282705 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.282724 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:55Z","lastTransitionTime":"2026-01-29T15:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.387314 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.387471 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.387500 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.387567 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.387593 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:55Z","lastTransitionTime":"2026-01-29T15:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.491523 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.491577 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.491591 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.491610 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.491628 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:55Z","lastTransitionTime":"2026-01-29T15:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.595101 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.595168 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.595183 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.595203 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.595213 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:55Z","lastTransitionTime":"2026-01-29T15:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.698848 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.698911 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.698924 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.698956 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.698970 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:55Z","lastTransitionTime":"2026-01-29T15:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.801489 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.801527 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.801539 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.801558 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.801571 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:55Z","lastTransitionTime":"2026-01-29T15:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.872119 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.872119 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:01:55 crc kubenswrapper[4620]: E0129 15:01:55.872305 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:01:55 crc kubenswrapper[4620]: E0129 15:01:55.872469 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.903858 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.903909 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.903918 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.903935 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:55 crc kubenswrapper[4620]: I0129 15:01:55.903947 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:55Z","lastTransitionTime":"2026-01-29T15:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.006889 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.006971 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.006987 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.007029 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.007045 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:56Z","lastTransitionTime":"2026-01-29T15:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.069368 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 21:03:16.299276964 +0000 UTC Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.110164 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.110230 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.110243 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.110279 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.110292 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:56Z","lastTransitionTime":"2026-01-29T15:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.213213 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.213277 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.213290 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.213309 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.213321 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:56Z","lastTransitionTime":"2026-01-29T15:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.254897 4620 generic.go:334] "Generic (PLEG): container finished" podID="f995d57b-b546-4226-83f5-3e2c1becec57" containerID="e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca" exitCode=0 Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.254992 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" event={"ID":"f995d57b-b546-4226-83f5-3e2c1becec57","Type":"ContainerDied","Data":"e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca"} Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.272074 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8
dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.287191 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.302818 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.315942 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.315970 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.315980 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.315996 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.316006 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:56Z","lastTransitionTime":"2026-01-29T15:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.319345 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.334184 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.349961 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.370983 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc99f91f58882c1c64b78b32316b7628eba4705
edacb6020cc2a11810887c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.390170 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.408449 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.419947 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.419999 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.420009 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.420030 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.420042 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:56Z","lastTransitionTime":"2026-01-29T15:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.425580 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.444982 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\
"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.460077 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.472710 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.485722 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.498046 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.511158 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.523534 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.523653 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.523668 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.523709 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.523724 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:56Z","lastTransitionTime":"2026-01-29T15:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.528083 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.626710 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.626784 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.626794 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.626815 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.626827 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:56Z","lastTransitionTime":"2026-01-29T15:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.733766 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.733814 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.733829 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.733852 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.733868 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:56Z","lastTransitionTime":"2026-01-29T15:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.837386 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.837453 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.837486 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.837510 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.837522 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:56Z","lastTransitionTime":"2026-01-29T15:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.875244 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:01:56 crc kubenswrapper[4620]: E0129 15:01:56.875424 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.875939 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:56 crc kubenswrapper[4620]: E0129 15:01:56.876002 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.939919 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.939990 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.940000 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.940020 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:56 crc kubenswrapper[4620]: I0129 15:01:56.940036 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:56Z","lastTransitionTime":"2026-01-29T15:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.042869 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.042914 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.042924 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.042944 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.042954 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:57Z","lastTransitionTime":"2026-01-29T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.069842 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 13:55:11.541680175 +0000 UTC Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.146380 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.146452 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.146467 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.146491 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.146504 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:57Z","lastTransitionTime":"2026-01-29T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.249321 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.249788 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.249799 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.249819 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.249836 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:57Z","lastTransitionTime":"2026-01-29T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.302155 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs\") pod \"network-metrics-daemon-twqvf\" (UID: \"82634d3f-d985-4384-bd37-426d509d4e57\") " pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:01:57 crc kubenswrapper[4620]: E0129 15:01:57.302331 4620 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:01:57 crc kubenswrapper[4620]: E0129 15:01:57.302410 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs podName:82634d3f-d985-4384-bd37-426d509d4e57 nodeName:}" failed. 
No retries permitted until 2026-01-29 15:02:13.302387439 +0000 UTC m=+73.915215084 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs") pod "network-metrics-daemon-twqvf" (UID: "82634d3f-d985-4384-bd37-426d509d4e57") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.353358 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.353404 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.353414 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.353434 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.353446 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:57Z","lastTransitionTime":"2026-01-29T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.457554 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.457607 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.457620 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.457642 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.457654 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:57Z","lastTransitionTime":"2026-01-29T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.560795 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.560857 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.560872 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.560936 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.560957 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:57Z","lastTransitionTime":"2026-01-29T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.664361 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.664432 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.664445 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.664473 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.664516 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:57Z","lastTransitionTime":"2026-01-29T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.767216 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.767271 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.767294 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.767315 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.767330 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:57Z","lastTransitionTime":"2026-01-29T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.870853 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.870953 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.870968 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.870995 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.871010 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:57Z","lastTransitionTime":"2026-01-29T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.871523 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:01:57 crc kubenswrapper[4620]: E0129 15:01:57.871694 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.871814 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:01:57 crc kubenswrapper[4620]: E0129 15:01:57.871889 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.973947 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.973994 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.974008 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.974029 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:57 crc kubenswrapper[4620]: I0129 15:01:57.974043 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:57Z","lastTransitionTime":"2026-01-29T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.070151 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 22:09:28.029944448 +0000 UTC Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.076852 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.076914 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.076926 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.076945 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.076954 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:58Z","lastTransitionTime":"2026-01-29T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.179441 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.179495 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.179510 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.179538 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.179557 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:58Z","lastTransitionTime":"2026-01-29T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.264028 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" event={"ID":"f995d57b-b546-4226-83f5-3e2c1becec57","Type":"ContainerStarted","Data":"c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f"} Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.282151 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.282212 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.282225 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.282247 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.282271 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:58Z","lastTransitionTime":"2026-01-29T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.386095 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.386163 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.386177 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.386206 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.386221 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:58Z","lastTransitionTime":"2026-01-29T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.488689 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.488730 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.488744 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.488786 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.488801 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:58Z","lastTransitionTime":"2026-01-29T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.590803 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.590848 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.590860 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.590874 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.590883 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:58Z","lastTransitionTime":"2026-01-29T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.694191 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.694283 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.694297 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.694323 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.694339 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:58Z","lastTransitionTime":"2026-01-29T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.797992 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.798051 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.798062 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.798085 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.798097 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:58Z","lastTransitionTime":"2026-01-29T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.872043 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.872183 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:01:58 crc kubenswrapper[4620]: E0129 15:01:58.872251 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:01:58 crc kubenswrapper[4620]: E0129 15:01:58.872425 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.901182 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.901239 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.901252 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.901276 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:58 crc kubenswrapper[4620]: I0129 15:01:58.901290 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:58Z","lastTransitionTime":"2026-01-29T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.004072 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.004138 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.004148 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.004165 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.004183 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:59Z","lastTransitionTime":"2026-01-29T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.071146 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 12:07:41.996895589 +0000 UTC Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.108637 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.108703 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.108716 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.108740 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.108771 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:59Z","lastTransitionTime":"2026-01-29T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.212340 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.212397 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.212410 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.212438 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.212452 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:59Z","lastTransitionTime":"2026-01-29T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.271207 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovnkube-controller/0.log" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.275143 4620 generic.go:334] "Generic (PLEG): container finished" podID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerID="cfc99f91f58882c1c64b78b32316b7628eba4705edacb6020cc2a11810887c02" exitCode=1 Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.275221 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerDied","Data":"cfc99f91f58882c1c64b78b32316b7628eba4705edacb6020cc2a11810887c02"} Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.277585 4620 scope.go:117] "RemoveContainer" containerID="cfc99f91f58882c1c64b78b32316b7628eba4705edacb6020cc2a11810887c02" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.281280 4620 generic.go:334] "Generic (PLEG): container finished" podID="f995d57b-b546-4226-83f5-3e2c1becec57" containerID="c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f" exitCode=0 Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.281335 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" event={"ID":"f995d57b-b546-4226-83f5-3e2c1becec57","Type":"ContainerDied","Data":"c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f"} Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.294304 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.317387 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.317421 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.317431 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.317447 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.317459 4620 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:59Z","lastTransitionTime":"2026-01-29T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.321043 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.333717 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.352553 4620 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c8
8b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\
\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.367775 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.383203 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.398177 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.415402 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.422644 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.423110 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.423249 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.423475 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.423600 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:59Z","lastTransitionTime":"2026-01-29T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.434237 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.451357 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.468521 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.488511 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.508598 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.526292 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 
15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.527601 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.527629 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.527637 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.527654 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.527664 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:59Z","lastTransitionTime":"2026-01-29T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.542604 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.568327 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc99f91f58882c1c64b78b32316b7628eba4705
edacb6020cc2a11810887c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc99f91f58882c1c64b78b32316b7628eba4705edacb6020cc2a11810887c02\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"message\\\":\\\"vent handler 6\\\\nI0129 15:01:58.750517 5914 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:01:58.750524 5914 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:01:58.750531 5914 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:01:58.750541 5914 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:01:58.750548 5914 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 15:01:58.750640 5914 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:01:58.750923 5914 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:01:58.751209 5914 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:01:58.751587 5914 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:01:58.751684 5914 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb
799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.588449 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.612697 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.630783 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.630829 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.630843 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.630864 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.630879 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:59Z","lastTransitionTime":"2026-01-29T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.639159 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc99f91f58882c1c64b78b32316b7628eba4705edacb6020cc2a11810887c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc99f91f58882c1c64b78b32316b7628eba4705edacb6020cc2a11810887c02\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"message\\\":\\\"vent handler 6\\\\nI0129 15:01:58.750517 5914 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:01:58.750524 5914 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:01:58.750531 5914 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:01:58.750541 5914 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:01:58.750548 5914 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 15:01:58.750640 5914 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:01:58.750923 5914 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:01:58.751209 5914 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:01:58.751587 5914 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:01:58.751684 5914 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb
799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.658146 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.675847 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.693345 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.712976 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5
e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.728626 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.740446 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.740509 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.740523 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.740549 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.740561 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:59Z","lastTransitionTime":"2026-01-29T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.748203 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.765251 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.780200 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.794784 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.810886 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.826535 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 
15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.844316 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.844361 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.844375 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.844392 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.844403 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:59Z","lastTransitionTime":"2026-01-29T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.845339 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.863201 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.871716 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.871832 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:01:59 crc kubenswrapper[4620]: E0129 15:01:59.871984 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:01:59 crc kubenswrapper[4620]: E0129 15:01:59.872492 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.880565 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.894018 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:01:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.947171 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.947232 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.947247 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.947271 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:01:59 crc kubenswrapper[4620]: I0129 15:01:59.947293 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:01:59Z","lastTransitionTime":"2026-01-29T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.050441 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.050496 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.050508 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.050527 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.050541 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:00Z","lastTransitionTime":"2026-01-29T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.071694 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 07:51:14.339658297 +0000 UTC Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.153035 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.153094 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.153105 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.153122 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.153138 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:00Z","lastTransitionTime":"2026-01-29T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.255541 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.255592 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.255604 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.255623 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.255635 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:00Z","lastTransitionTime":"2026-01-29T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.296124 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" event={"ID":"f995d57b-b546-4226-83f5-3e2c1becec57","Type":"ContainerStarted","Data":"8726ce93fdc02c61e0cb178041c92b506db8b8d19de3d9ea70ed78a8e8b1d4b0"} Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.299854 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovnkube-controller/0.log" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.305146 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerStarted","Data":"17a7148615c4fa6d4751da88c28ed7a960515bc7704f157ed50f53869807fa06"} Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.305987 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.313807 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.333073 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc99f91f58882c1c64b78b32316b7628eba4705
edacb6020cc2a11810887c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc99f91f58882c1c64b78b32316b7628eba4705edacb6020cc2a11810887c02\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"message\\\":\\\"vent handler 6\\\\nI0129 15:01:58.750517 5914 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:01:58.750524 5914 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:01:58.750531 5914 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:01:58.750541 5914 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:01:58.750548 5914 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 15:01:58.750640 5914 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:01:58.750923 5914 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:01:58.751209 5914 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:01:58.751587 5914 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:01:58.751684 5914 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb
799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.347164 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.358979 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.359047 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.359060 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.359083 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.359097 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:00Z","lastTransitionTime":"2026-01-29T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.363292 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8726ce93fdc02c61e0cb178041c92b506db8b8d19de3d9ea70ed78a8e8b1d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.377629 4620 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.392660 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.406138 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.422085 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.438663 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.453562 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.461807 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.461871 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.461887 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.461911 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.461926 4620 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:00Z","lastTransitionTime":"2026-01-29T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.471203 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.489825 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.505652 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.522379 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 
15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.537154 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.552946 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.564153 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.564210 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.564294 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.564315 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.564325 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:00Z","lastTransitionTime":"2026-01-29T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.577742 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.598998 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.621804 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.641962 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.658690 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 
15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.667241 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.667293 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.667305 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.667327 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.667339 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:00Z","lastTransitionTime":"2026-01-29T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.671643 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.686026 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.706082 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a7148615c4fa6d4751da88c28ed7a960515bc7
704f157ed50f53869807fa06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc99f91f58882c1c64b78b32316b7628eba4705edacb6020cc2a11810887c02\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"message\\\":\\\"vent handler 6\\\\nI0129 15:01:58.750517 5914 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:01:58.750524 5914 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:01:58.750531 5914 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:01:58.750541 5914 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:01:58.750548 5914 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 15:01:58.750640 5914 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:01:58.750923 5914 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:01:58.751209 5914 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:01:58.751587 5914 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:01:58.751684 5914 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.720928 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.733845 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.747483 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5
e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.763600 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8726ce93fdc02c61e0cb178041c92b506db8b8d19de3d9ea70ed78a8e8b1d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.770647 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.771135 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.771274 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.774494 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.774577 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:00Z","lastTransitionTime":"2026-01-29T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.785201 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.804045 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.823183 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.838962 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.851898 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.870484 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.871636 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.871643 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:00 crc kubenswrapper[4620]: E0129 15:02:00.871762 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:02:00 crc kubenswrapper[4620]: E0129 15:02:00.871917 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.876240 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.876261 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.876270 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.876285 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.876293 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:00Z","lastTransitionTime":"2026-01-29T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.884469 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.904339 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a7148615c4fa6d4751da88c28ed7a960515bc7704f157ed50f53869807fa06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc99f91f58882c1c64b78b32316b7628eba4705edacb6020cc2a11810887c02\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"message\\\":\\\"vent handler 6\\\\nI0129 15:01:58.750517 5914 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:01:58.750524 5914 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:01:58.750531 5914 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:01:58.750541 5914 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:01:58.750548 5914 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 15:01:58.750640 5914 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:01:58.750923 5914 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:01:58.751209 5914 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:01:58.751587 5914 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:01:58.751684 5914 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.921731 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.937165 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.949382 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5
e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.964952 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8726ce93fdc02c61e0cb178041c92b506db8b8d19de3d9ea70ed78a8e8b1d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.978100 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.978130 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.978139 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.978153 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.978163 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:00Z","lastTransitionTime":"2026-01-29T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.980460 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:00 crc kubenswrapper[4620]: I0129 15:02:00.993579 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.011247 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.016198 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.024946 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.038869 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.052897 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.069412 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.071892 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 22:43:04.770954357 +0000 UTC Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.080574 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.080803 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.080891 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.080982 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.081056 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:01Z","lastTransitionTime":"2026-01-29T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.087180 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.102698 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.114580 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 
15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.126727 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.141971 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 
15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.152703 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.163588 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.177193 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.182719 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.182764 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.182776 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.182793 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.182803 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:01Z","lastTransitionTime":"2026-01-29T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.187327 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.196716 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.215503 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a7148615c4fa6d4751da88c28ed7a960515bc7
704f157ed50f53869807fa06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc99f91f58882c1c64b78b32316b7628eba4705edacb6020cc2a11810887c02\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"message\\\":\\\"vent handler 6\\\\nI0129 15:01:58.750517 5914 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:01:58.750524 5914 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:01:58.750531 5914 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:01:58.750541 5914 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:01:58.750548 5914 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 15:01:58.750640 5914 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:01:58.750923 5914 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:01:58.751209 5914 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:01:58.751587 5914 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:01:58.751684 5914 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.228166 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8726ce93fdc02c61e0cb178041c92b506db8b8d19de3d9ea70ed78a8e8b1d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.242072 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.268020 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.283530 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5
e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.285401 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.285448 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.285458 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.285477 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.285489 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:01Z","lastTransitionTime":"2026-01-29T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.296318 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.317988 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.334567 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.349667 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.365819 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.380124 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.388003 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.388037 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.388047 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.388063 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.388074 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:01Z","lastTransitionTime":"2026-01-29T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.492376 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.492434 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.492442 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.492458 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.492470 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:01Z","lastTransitionTime":"2026-01-29T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.594467 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.594522 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.594536 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.594556 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.594954 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:01Z","lastTransitionTime":"2026-01-29T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.698213 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.698270 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.698282 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.698305 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.698317 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:01Z","lastTransitionTime":"2026-01-29T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.801257 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.801294 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.801303 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.801316 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.801324 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:01Z","lastTransitionTime":"2026-01-29T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.872463 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.872515 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:01 crc kubenswrapper[4620]: E0129 15:02:01.872643 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:01 crc kubenswrapper[4620]: E0129 15:02:01.872832 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.903863 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.903888 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.903895 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.903907 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:01 crc kubenswrapper[4620]: I0129 15:02:01.903916 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:01Z","lastTransitionTime":"2026-01-29T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.007814 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.007880 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.007890 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.007912 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.007924 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:02Z","lastTransitionTime":"2026-01-29T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.072874 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 09:40:37.92842831 +0000 UTC Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.110261 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.110308 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.110318 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.110336 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.110353 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:02Z","lastTransitionTime":"2026-01-29T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.213315 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.213360 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.213371 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.213391 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.213402 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:02Z","lastTransitionTime":"2026-01-29T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.315548 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.315586 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.315594 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.315613 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.315623 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:02Z","lastTransitionTime":"2026-01-29T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.418803 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.418859 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.418872 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.418893 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.418907 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:02Z","lastTransitionTime":"2026-01-29T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.522770 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.522853 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.522867 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.522887 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.522899 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:02Z","lastTransitionTime":"2026-01-29T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.625868 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.625900 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.625907 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.625923 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.625931 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:02Z","lastTransitionTime":"2026-01-29T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.730193 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.730263 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.730282 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.730309 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.730332 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:02Z","lastTransitionTime":"2026-01-29T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.834183 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.834489 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.834502 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.834520 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.834531 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:02Z","lastTransitionTime":"2026-01-29T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.873741 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:02:02 crc kubenswrapper[4620]: E0129 15:02:02.873867 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.874013 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:02 crc kubenswrapper[4620]: E0129 15:02:02.874093 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.937136 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.937171 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.937181 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.937196 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:02 crc kubenswrapper[4620]: I0129 15:02:02.937207 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:02Z","lastTransitionTime":"2026-01-29T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.040488 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.040547 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.040564 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.040586 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.040602 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:03Z","lastTransitionTime":"2026-01-29T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.073055 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 19:12:30.796800027 +0000 UTC Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.143603 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.143645 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.143654 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.143667 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.143675 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:03Z","lastTransitionTime":"2026-01-29T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.246889 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.246936 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.246950 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.246969 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.246982 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:03Z","lastTransitionTime":"2026-01-29T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.321568 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovnkube-controller/1.log" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.322597 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovnkube-controller/0.log" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.326172 4620 generic.go:334] "Generic (PLEG): container finished" podID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerID="17a7148615c4fa6d4751da88c28ed7a960515bc7704f157ed50f53869807fa06" exitCode=1 Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.326390 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerDied","Data":"17a7148615c4fa6d4751da88c28ed7a960515bc7704f157ed50f53869807fa06"} Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.326544 4620 scope.go:117] "RemoveContainer" containerID="cfc99f91f58882c1c64b78b32316b7628eba4705edacb6020cc2a11810887c02" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.327698 4620 scope.go:117] "RemoveContainer" containerID="17a7148615c4fa6d4751da88c28ed7a960515bc7704f157ed50f53869807fa06" Jan 29 15:02:03 crc kubenswrapper[4620]: E0129 15:02:03.328150 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ks4d9_openshift-ovn-kubernetes(fa9cbed4-05b4-48af-81c2-9f8903dc765e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.345689 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.350796 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.351045 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.351131 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.351211 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.351286 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:03Z","lastTransitionTime":"2026-01-29T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.359781 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.374286 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.391119 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.404284 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.419545 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.435566 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.451078 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.454108 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.454176 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.454191 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.454216 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.454230 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:03Z","lastTransitionTime":"2026-01-29T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.466594 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.489052 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 29 
15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.504203 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.522223 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.542571 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a7148615c4fa6d4751da88c28ed7a960515bc7
704f157ed50f53869807fa06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc99f91f58882c1c64b78b32316b7628eba4705edacb6020cc2a11810887c02\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"message\\\":\\\"vent handler 6\\\\nI0129 15:01:58.750517 5914 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:01:58.750524 5914 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:01:58.750531 5914 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:01:58.750541 5914 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:01:58.750548 5914 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 15:01:58.750640 5914 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:01:58.750923 5914 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:01:58.751209 5914 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:01:58.751587 5914 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:01:58.751684 5914 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a7148615c4fa6d4751da88c28ed7a960515bc7704f157ed50f53869807fa06\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:02Z\\\",\\\"message\\\":\\\"ig-operator/machine-config-daemon-7469t after 0 failed attempt(s)\\\\nI0129 15:02:01.135339 6114 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-7469t\\\\nI0129 15:02:01.134178 6114 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 15:02:01.135354 6114 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0129 15:02:01.135353 6114 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:02:01.135361\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.557128 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.557174 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.557185 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.557203 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.557214 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:03Z","lastTransitionTime":"2026-01-29T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.559780 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.573447 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.587230 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5
e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.607855 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8726ce93fdc02c61e0cb178041c92b506db8b8d19de3d9ea70ed78a8e8b1d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.659837 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.659872 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.659881 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.659895 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.659904 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:03Z","lastTransitionTime":"2026-01-29T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.762045 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.762085 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.762093 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.762110 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.762122 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:03Z","lastTransitionTime":"2026-01-29T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.864833 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.864874 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.864889 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.864905 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.864915 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:03Z","lastTransitionTime":"2026-01-29T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.871581 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.871647 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:03 crc kubenswrapper[4620]: E0129 15:02:03.871902 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:03 crc kubenswrapper[4620]: E0129 15:02:03.872030 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.967246 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.967293 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.967305 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.967323 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:03 crc kubenswrapper[4620]: I0129 15:02:03.967335 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:03Z","lastTransitionTime":"2026-01-29T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.070584 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.070641 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.070657 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.070684 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.070701 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:04Z","lastTransitionTime":"2026-01-29T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.074172 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 11:06:04.677682702 +0000 UTC Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.174069 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.174118 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.174129 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.174153 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.174167 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:04Z","lastTransitionTime":"2026-01-29T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.277868 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.277941 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.277956 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.277984 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.278002 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:04Z","lastTransitionTime":"2026-01-29T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.331555 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovnkube-controller/1.log" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.379870 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.379916 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.379925 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.379939 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.379948 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:04Z","lastTransitionTime":"2026-01-29T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.483008 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.483285 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.483376 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.483458 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.483533 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:04Z","lastTransitionTime":"2026-01-29T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.586543 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.586603 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.586616 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.586637 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.586655 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:04Z","lastTransitionTime":"2026-01-29T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.690022 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.690087 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.690105 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.690128 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.690190 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:04Z","lastTransitionTime":"2026-01-29T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.792940 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.792973 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.792988 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.793002 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.793011 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:04Z","lastTransitionTime":"2026-01-29T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.872224 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:02:04 crc kubenswrapper[4620]: E0129 15:02:04.872363 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.872395 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:04 crc kubenswrapper[4620]: E0129 15:02:04.872534 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.895808 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.895853 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.895865 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.895881 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.896190 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:04Z","lastTransitionTime":"2026-01-29T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.998687 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.998729 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.998750 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.998789 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:04 crc kubenswrapper[4620]: I0129 15:02:04.998802 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:04Z","lastTransitionTime":"2026-01-29T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.076339 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 12:28:09.73662139 +0000 UTC Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.101103 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.101169 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.101185 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.101206 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.101218 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:05Z","lastTransitionTime":"2026-01-29T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.199947 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.200143 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:05 crc kubenswrapper[4620]: E0129 15:02:05.200283 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:02:37.200238439 +0000 UTC m=+97.813066104 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:02:05 crc kubenswrapper[4620]: E0129 15:02:05.200403 4620 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:02:05 crc kubenswrapper[4620]: E0129 15:02:05.200591 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:02:37.200539097 +0000 UTC m=+97.813366742 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.204087 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.204136 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.204146 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.204200 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.204216 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:05Z","lastTransitionTime":"2026-01-29T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.208800 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.208829 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.208841 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.208856 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.208867 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:05Z","lastTransitionTime":"2026-01-29T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:05 crc kubenswrapper[4620]: E0129 15:02:05.227322 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.232065 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.232097 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.232106 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.232119 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.232129 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:05Z","lastTransitionTime":"2026-01-29T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:05 crc kubenswrapper[4620]: E0129 15:02:05.247390 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.252884 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.252952 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.252962 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.252996 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.253009 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:05Z","lastTransitionTime":"2026-01-29T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:05 crc kubenswrapper[4620]: E0129 15:02:05.266922 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.271813 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.271887 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.271898 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.271923 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.271952 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:05Z","lastTransitionTime":"2026-01-29T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:05 crc kubenswrapper[4620]: E0129 15:02:05.286571 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.290823 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.290962 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
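Every patch attempt above fails at the same point: the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743 presents a serving certificate whose notAfter (2025-08-24T17:21:41Z) is months before the node's clock (2026-01-29T15:02:05Z), so Go's TLS verifier rejects the connection before the POST is ever delivered. Below is a minimal Go sketch of the validity-window check that crypto/x509 is effectively applying, useful for inspecting the offending certificate offline; the PEM path is a hypothetical placeholder, not taken from this log.

// certcheck.go: minimal sketch of the x509 validity-window check behind the
// "certificate has expired or is not yet valid" failures in this log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path: point this at the webhook's serving certificate.
	pemBytes, err := os.ReadFile("/tmp/webhook-serving-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now().UTC()
	fmt.Printf("NotBefore: %s\nNotAfter:  %s\nNow:       %s\n",
		cert.NotBefore.UTC().Format(time.RFC3339),
		cert.NotAfter.UTC().Format(time.RFC3339),
		now.Format(time.RFC3339))
	// The verifier rejects any certificate where now falls outside
	// [NotBefore, NotAfter], which is exactly the error text above.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Println("certificate has expired or is not yet valid")
	}
}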
event="NodeHasNoDiskPressure" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.291049 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.291161 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.291284 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:05Z","lastTransitionTime":"2026-01-29T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:05 crc kubenswrapper[4620]: E0129 15:02:05.303823 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:05 crc kubenswrapper[4620]: E0129 15:02:05.304068 4620 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.308506 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
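The 15:02:05.304068 record shows the other half of the mechanism: the kubelet retries the status patch only a fixed number of times per sync before logging "update node status exceeds retry count" and giving up until the next sync period. A minimal sketch of that bounded-retry pattern follows; the constant name and count of 5 follow the upstream kubelet's nodeStatusUpdateRetry, but the surrounding code is illustrative, not kubelet source.

package main

import (
	"errors"
	"fmt"
)

// nodeStatusUpdateRetry mirrors the upstream kubelet constant (5 attempts);
// everything else here is an illustrative sketch.
const nodeStatusUpdateRetry = 5

func updateNodeStatus(tryPatch func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryPatch(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	// Simulate the failure mode in this log: every PATCH is rejected by the
	// admission webhook because its serving certificate has expired.
	webhookErr := errors.New(`failed calling webhook "node.network-node-identity.openshift.io": x509: certificate has expired or is not yet valid`)
	if err := updateNodeStatus(func() error { return webhookErr }); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}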
event="NodeHasSufficientMemory" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.308619 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.308709 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.308793 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.308861 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:05Z","lastTransitionTime":"2026-01-29T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.411152 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.411209 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.411221 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.411247 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.411260 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:05Z","lastTransitionTime":"2026-01-29T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.513729 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.516937 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.518490 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.518643 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.518722 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:05Z","lastTransitionTime":"2026-01-29T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.704226 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.704485 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.704619 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:02:05 crc kubenswrapper[4620]: E0129 15:02:05.704799 4620 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 15:02:05 crc kubenswrapper[4620]: E0129 15:02:05.704916 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:02:37.704900413 +0000 UTC m=+98.317728058 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 15:02:05 crc kubenswrapper[4620]: E0129 15:02:05.705055 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 15:02:05 crc kubenswrapper[4620]: E0129 15:02:05.705119 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 15:02:05 crc kubenswrapper[4620]: E0129 15:02:05.705175 4620 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 15:02:05 crc kubenswrapper[4620]: E0129 15:02:05.705250 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:02:37.705242104 +0000 UTC m=+98.318069749 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 15:02:05 crc kubenswrapper[4620]: E0129 15:02:05.705352 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 15:02:05 crc kubenswrapper[4620]: E0129 15:02:05.705418 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 15:02:05 crc kubenswrapper[4620]: E0129 15:02:05.705470 4620 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 15:02:05 crc kubenswrapper[4620]: E0129 15:02:05.705542 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:02:37.705534293 +0000 UTC m=+98.318361938 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.725441 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.725789 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.725933 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.726074 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.726202 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:05Z","lastTransitionTime":"2026-01-29T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.828263 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.828296 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.828307 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.828332 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.828343 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:05Z","lastTransitionTime":"2026-01-29T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.872000 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.872135 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:05 crc kubenswrapper[4620]: E0129 15:02:05.872176 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:05 crc kubenswrapper[4620]: E0129 15:02:05.872423 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.931622 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.931933 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.932001 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.932078 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:05 crc kubenswrapper[4620]: I0129 15:02:05.932135 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:05Z","lastTransitionTime":"2026-01-29T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:06 crc kubenswrapper[4620]: I0129 15:02:06.034657 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:06 crc kubenswrapper[4620]: I0129 15:02:06.034981 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:06 crc kubenswrapper[4620]: I0129 15:02:06.035065 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:06 crc kubenswrapper[4620]: I0129 15:02:06.035148 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:06 crc kubenswrapper[4620]: I0129 15:02:06.035246 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:06Z","lastTransitionTime":"2026-01-29T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:06 crc kubenswrapper[4620]: I0129 15:02:06.076600 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 01:44:56.98528009 +0000 UTC Jan 29 15:02:06 crc kubenswrapper[4620]: I0129 15:02:06.137650 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:06 crc kubenswrapper[4620]: I0129 15:02:06.137892 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:06 crc kubenswrapper[4620]: I0129 15:02:06.138000 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:06 crc kubenswrapper[4620]: I0129 15:02:06.138066 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:06 crc kubenswrapper[4620]: I0129 15:02:06.138146 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:06Z","lastTransitionTime":"2026-01-29T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:06 crc kubenswrapper[4620]: I0129 15:02:06.243412 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:06 crc kubenswrapper[4620]: I0129 15:02:06.243460 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:06 crc kubenswrapper[4620]: I0129 15:02:06.243472 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:06 crc kubenswrapper[4620]: I0129 15:02:06.243490 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:06 crc kubenswrapper[4620]: I0129 15:02:06.243506 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:06Z","lastTransitionTime":"2026-01-29T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
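The certificate_manager lines print a different rotation deadline on each attempt because the deadline is re-drawn with jitter: a random instant late in the certificate's validity window (client-go's manager picks roughly between 70% and 90% of the NotBefore-to-NotAfter lifetime, which matches the late-November/early-December deadlines seen here for a certificate expiring 2026-02-24). A sketch of that computation follows; the 70%/90% window and the assumed issue time are assumptions, not values read from this cluster.

```go
// Sketch of a jittered rotation deadline, matching the certificate_manager
// lines above that print a fresh deadline on every attempt. The 70-90%
// jitter window is an assumption about client-go's behavior.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random instant between 70% and 90% of the
// certificate's validity lifetime.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(fraction * float64(lifetime)))
}

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
	for i := 0; i < 3; i++ {
		fmt.Printf("Certificate expiration is %s, rotation deadline is %s\n",
			notAfter, rotationDeadline(notBefore, notAfter))
	}
}
```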
Jan 29 15:02:06 crc kubenswrapper[4620]: I0129 15:02:06.871822 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:02:06 crc kubenswrapper[4620]: I0129 15:02:06.871736 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:02:06 crc kubenswrapper[4620]: E0129 15:02:06.872038 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:02:06 crc kubenswrapper[4620]: E0129 15:02:06.872154 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
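Every pod-sync failure in this window traces back to the same root condition: the runtime reports NetworkReady=false because no CNI configuration exists in /etc/kubernetes/cni/net.d/, and readiness flips only once the network provider writes a config file there. Below is a standalone sketch of such a presence check; the accepted extension set mirrors what CNI config loaders typically look for, so treat the exact list as an assumption.

```go
// Standalone sketch of the readiness check implied by the log: the network
// is "not ready" until a CNI configuration file shows up in the conf dir.
// The accepted extensions are an assumption about typical CNI loaders.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cniConfigPresent(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		fmt.Println("container runtime network not ready: no CNI configuration file found. Has your network provider started?")
		return
	}
	fmt.Println("NetworkReady=true")
}
```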
Jan 29 15:02:07 crc kubenswrapper[4620]: I0129 15:02:07.078322 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 05:53:36.595901083 +0000 UTC
Jan 29 15:02:07 crc kubenswrapper[4620]: I0129 15:02:07.872131 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf"
Jan 29 15:02:07 crc kubenswrapper[4620]: I0129 15:02:07.872147 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:02:07 crc kubenswrapper[4620]: E0129 15:02:07.872310 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57"
Jan 29 15:02:07 crc kubenswrapper[4620]: E0129 15:02:07.872425 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:02:07 crc kubenswrapper[4620]: I0129 15:02:07.884802 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 29 15:02:08 crc kubenswrapper[4620]: I0129 15:02:08.078787 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:44:55.223367717 +0000 UTC
Jan 29 15:02:08 crc kubenswrapper[4620]: I0129 15:02:08.872238 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:02:08 crc kubenswrapper[4620]: E0129 15:02:08.872503 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:02:08 crc kubenswrapper[4620]: I0129 15:02:08.872303 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:02:08 crc kubenswrapper[4620]: E0129 15:02:08.873685 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:02:09 crc kubenswrapper[4620]: I0129 15:02:09.079550 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:24:54.1553896 +0000 UTC
Has your network provider started?"} Jan 29 15:02:09 crc kubenswrapper[4620]: I0129 15:02:09.871484 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:09 crc kubenswrapper[4620]: E0129 15:02:09.871623 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:09 crc kubenswrapper[4620]: I0129 15:02:09.871483 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:09 crc kubenswrapper[4620]: E0129 15:02:09.871910 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:09 crc kubenswrapper[4620]: I0129 15:02:09.956833 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:09 crc kubenswrapper[4620]: I0129 15:02:09.956884 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:09 crc kubenswrapper[4620]: I0129 15:02:09.956893 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:09 crc kubenswrapper[4620]: I0129 15:02:09.956908 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:09 crc kubenswrapper[4620]: I0129 15:02:09.956917 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:09Z","lastTransitionTime":"2026-01-29T15:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.059514 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.059566 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.059583 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.059599 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.059611 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:10Z","lastTransitionTime":"2026-01-29T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.080706 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:22:55.46570528 +0000 UTC Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.161820 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.162287 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.162352 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.162434 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.162507 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:10Z","lastTransitionTime":"2026-01-29T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.264476 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.264522 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.264532 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.264547 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.264556 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:10Z","lastTransitionTime":"2026-01-29T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.368400 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.368455 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.368465 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.368481 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.368492 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:10Z","lastTransitionTime":"2026-01-29T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.470701 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.470991 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.471051 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.471118 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.471185 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:10Z","lastTransitionTime":"2026-01-29T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.574476 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.574521 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.574531 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.574548 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.574559 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:10Z","lastTransitionTime":"2026-01-29T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.678677 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.679098 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.679162 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.679231 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.679308 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:10Z","lastTransitionTime":"2026-01-29T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.782449 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.782934 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.783043 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.783160 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.783310 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:10Z","lastTransitionTime":"2026-01-29T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.872394 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.872410 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:10 crc kubenswrapper[4620]: E0129 15:02:10.872555 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:02:10 crc kubenswrapper[4620]: E0129 15:02:10.872630 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.885055 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.893102 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.899902 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.910090 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.914637 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.914672 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.914680 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.914696 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.914706 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:10Z","lastTransitionTime":"2026-01-29T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.920597 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.931481 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.941700 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.951356 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.967092 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.977263 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:10 crc kubenswrapper[4620]: I0129 15:02:10.991238 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"646c0f4c-1b62-4944-a0dc-4db4f86c1f2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a41955055dd921c81e1114d5a36e76e91c96b01286e9c7a76c6118e47cc5b1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.001506 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.012004 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.016651 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.016674 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.016683 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.016695 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.016703 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:11Z","lastTransitionTime":"2026-01-29T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.027690 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a7148615c4fa6d4751da88c28ed7a960515bc7704f157ed50f53869807fa06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc99f91f58882c1c64b78b32316b7628eba4705edacb6020cc2a11810887c02\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"message\\\":\\\"vent handler 6\\\\nI0129 15:01:58.750517 5914 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:01:58.750524 5914 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:01:58.750531 5914 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15:01:58.750541 5914 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:01:58.750548 5914 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 15:01:58.750640 5914 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:01:58.750923 5914 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:01:58.751209 5914 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:01:58.751587 5914 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:01:58.751684 5914 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a7148615c4fa6d4751da88c28ed7a960515bc7704f157ed50f53869807fa06\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:02Z\\\",\\\"message\\\":\\\"ig-operator/machine-config-daemon-7469t after 0 failed attempt(s)\\\\nI0129 15:02:01.135339 6114 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-7469t\\\\nI0129 15:02:01.134178 6114 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 15:02:01.135354 6114 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0129 15:02:01.135353 6114 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z]\\\\nI0129 
15:02:01.135361\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.037413 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3
4720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.046216 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.057375 4620 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8726ce93fdc02c61e0cb178041c92b506db8b8d19de3d9ea70ed78a8e8b1d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.070122 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.081400 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 10:57:05.1458628 +0000 UTC Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.129071 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.129125 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.129135 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.129157 4620 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.129166 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:11Z","lastTransitionTime":"2026-01-29T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.232132 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.232168 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.232179 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.232194 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.232205 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:11Z","lastTransitionTime":"2026-01-29T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.334899 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.334949 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.334961 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.334981 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.334994 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:11Z","lastTransitionTime":"2026-01-29T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.437667 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.437705 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.437714 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.437727 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.437736 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:11Z","lastTransitionTime":"2026-01-29T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.540831 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.540862 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.540873 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.540889 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.540900 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:11Z","lastTransitionTime":"2026-01-29T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.643437 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.643483 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.643498 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.643522 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.643540 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:11Z","lastTransitionTime":"2026-01-29T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.746595 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.746655 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.746667 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.746690 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.746705 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:11Z","lastTransitionTime":"2026-01-29T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.849953 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.849985 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.849994 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.850010 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.850020 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:11Z","lastTransitionTime":"2026-01-29T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.871823 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf"
Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.871900 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:02:11 crc kubenswrapper[4620]: E0129 15:02:11.871991 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57"
Jan 29 15:02:11 crc kubenswrapper[4620]: E0129 15:02:11.872235 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.953512 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.953561 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.953574 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.953594 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:11 crc kubenswrapper[4620]: I0129 15:02:11.953614 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:11Z","lastTransitionTime":"2026-01-29T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.057124 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.057630 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.057716 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.057809 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.057919 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:12Z","lastTransitionTime":"2026-01-29T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.081709 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 23:00:02.717671102 +0000 UTC Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.161706 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.161846 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.161859 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.161879 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.161889 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:12Z","lastTransitionTime":"2026-01-29T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.264747 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.264822 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.264836 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.264854 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.264867 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:12Z","lastTransitionTime":"2026-01-29T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
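The kubelet-serving rotation deadline above changes on every attempt (2025-12-21 here, 2026-01-18, 2025-12-19 and 2025-11-10 on the following seconds). That is consistent with client-go's certificate manager, which draws a fresh jittered deadline at 70-90% of the certificate's lifetime; since every drawn deadline is already in the past against this log's clock (2026-01-29), the manager keeps trying, apparently about once a second. A minimal self-contained sketch of that jitter scheme, not the actual client-go code; the notBefore value is an assumption (only the expiry, 2026-02-24 05:53:03 UTC, appears in the log):

```go
// rotation.go: jittered rotation deadline, in the style of client-go's
// certificate manager: a random point in 70-90% of the cert's validity.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline draws a deadline uniformly from
// [notBefore+0.7*lifetime, notBefore+0.9*lifetime).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := float64(notAfter.Sub(notBefore))
	return notBefore.Add(time.Duration(lifetime * (0.7 + 0.2*rand.Float64())))
}

func main() {
	notBefore := time.Date(2025, 3, 16, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
	// Each call yields a different deadline, which is why consecutive log
	// lines show deadlines scattered between Nov 2025 and Jan 2026.
	for i := 0; i < 4; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}
```

With the assumed mid-March 2025 notBefore, the 70-90% window spans roughly November 2025 through January 2026, matching the deadlines scattered through this log.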
Has your network provider started?"} Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.367819 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.367886 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.367900 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.367920 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.367932 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:12Z","lastTransitionTime":"2026-01-29T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.472155 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.472571 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.472710 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.472838 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.472936 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:12Z","lastTransitionTime":"2026-01-29T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.576482 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.576533 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.576546 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.576568 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.576582 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:12Z","lastTransitionTime":"2026-01-29T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.678855 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.678906 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.678917 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.678936 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.678947 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:12Z","lastTransitionTime":"2026-01-29T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.782507 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.782542 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.782551 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.782568 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.782579 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:12Z","lastTransitionTime":"2026-01-29T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.872256 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.872256 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:02:12 crc kubenswrapper[4620]: E0129 15:02:12.872569 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:02:12 crc kubenswrapper[4620]: E0129 15:02:12.872472 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.884789 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.884835 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.884848 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.884865 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.884878 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:12Z","lastTransitionTime":"2026-01-29T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.987959 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.988023 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.988035 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.988054 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:12 crc kubenswrapper[4620]: I0129 15:02:12.988065 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:12Z","lastTransitionTime":"2026-01-29T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.081926 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 03:20:45.584575442 +0000 UTC Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.091428 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.091480 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.091494 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.091518 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.091532 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:13Z","lastTransitionTime":"2026-01-29T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.194605 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.194687 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.194698 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.194720 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.194734 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:13Z","lastTransitionTime":"2026-01-29T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.298584 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.298634 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.298646 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.298665 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.298679 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:13Z","lastTransitionTime":"2026-01-29T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.388690 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs\") pod \"network-metrics-daemon-twqvf\" (UID: \"82634d3f-d985-4384-bd37-426d509d4e57\") " pod="openshift-multus/network-metrics-daemon-twqvf"
Jan 29 15:02:13 crc kubenswrapper[4620]: E0129 15:02:13.389016 4620 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 15:02:13 crc kubenswrapper[4620]: E0129 15:02:13.389185 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs podName:82634d3f-d985-4384-bd37-426d509d4e57 nodeName:}" failed. No retries permitted until 2026-01-29 15:02:45.389144724 +0000 UTC m=+106.001972569 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs") pod "network-metrics-daemon-twqvf" (UID: "82634d3f-d985-4384-bd37-426d509d4e57") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.402430 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.402483 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.402491 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.402508 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.402519 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:13Z","lastTransitionTime":"2026-01-29T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.505570 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.505628 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.505644 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.505666 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.505681 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:13Z","lastTransitionTime":"2026-01-29T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
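The nestedpendingoperations entry above is kubelet's per-volume exponential backoff at work: the mount of metrics-certs failed and the next attempt is pushed out 32 seconds (15:02:13 to 15:02:45). 32s sits on the doubling sequence kubelet uses for failed volume operations (500ms initial, doubling to a cap of 2m2s), so this is roughly the seventh straight failure. A self-contained sketch of that bookkeeping (type and field names here are illustrative; the constants are kubelet's defaults):

```go
// backoff.go: doubling retry delay for failed volume operations.
package main

import (
	"fmt"
	"time"
)

const (
	initialWait = 500 * time.Millisecond        // initial durationBeforeRetry
	maxWait     = 2*time.Minute + 2*time.Second // cap on the delay
)

// backoff mimics the per-operation state kubelet keeps for a failing mount.
type backoff struct {
	last time.Time     // when the operation last failed
	wait time.Duration // current durationBeforeRetry
}

// fail records a failure and doubles the delay, up to the cap.
func (b *backoff) fail(now time.Time) {
	if b.wait == 0 {
		b.wait = initialWait
	} else {
		b.wait *= 2
		if b.wait > maxWait {
			b.wait = maxWait
		}
	}
	b.last = now
}

// allowed reports whether a new attempt is permitted at time now.
func (b *backoff) allowed(now time.Time) bool {
	return now.After(b.last.Add(b.wait))
}

func main() {
	var b backoff
	t := time.Date(2026, 1, 29, 15, 2, 13, 0, time.UTC) // failure time from the log
	for i := 1; i <= 7; i++ {
		b.fail(t)
		fmt.Printf("failure %d: durationBeforeRetry %v, no retries until %v\n",
			i, b.wait, t.Add(b.wait).Format("15:04:05"))
	}
	fmt.Println("retry allowed at 15:02:20?", b.allowed(t.Add(7*time.Second)))
}
```

The seventh iteration prints durationBeforeRetry 32s with a retry time of 15:02:45, matching the log line above.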
Has your network provider started?"} Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.608768 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.608843 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.608854 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.608875 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.608885 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:13Z","lastTransitionTime":"2026-01-29T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.712122 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.712182 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.712202 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.712235 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.712254 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:13Z","lastTransitionTime":"2026-01-29T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.815085 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.815143 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.815157 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.815180 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.815200 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:13Z","lastTransitionTime":"2026-01-29T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.871560 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.871560 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:13 crc kubenswrapper[4620]: E0129 15:02:13.872244 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:13 crc kubenswrapper[4620]: E0129 15:02:13.872281 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.920840 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.920906 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.920919 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.920948 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:13 crc kubenswrapper[4620]: I0129 15:02:13.920966 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:13Z","lastTransitionTime":"2026-01-29T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.023701 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.023779 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.023793 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.023820 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.023842 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:14Z","lastTransitionTime":"2026-01-29T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.083147 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 08:20:44.703047568 +0000 UTC Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.127258 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.127323 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.127337 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.127362 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.127373 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:14Z","lastTransitionTime":"2026-01-29T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.232029 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.232124 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.232139 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.232164 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.232180 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:14Z","lastTransitionTime":"2026-01-29T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.334964 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.335017 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.335031 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.335055 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.335067 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:14Z","lastTransitionTime":"2026-01-29T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.439335 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.439392 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.439402 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.439428 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.439438 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:14Z","lastTransitionTime":"2026-01-29T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.542379 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.542451 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.542468 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.542488 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.542504 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:14Z","lastTransitionTime":"2026-01-29T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.646207 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.646251 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.646260 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.646296 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.646308 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:14Z","lastTransitionTime":"2026-01-29T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.748840 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.748895 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.748911 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.748933 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.748946 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:14Z","lastTransitionTime":"2026-01-29T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.851425 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.851477 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.851489 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.851509 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.851521 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:14Z","lastTransitionTime":"2026-01-29T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.871725 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.871853 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:02:14 crc kubenswrapper[4620]: E0129 15:02:14.871879 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:02:14 crc kubenswrapper[4620]: E0129 15:02:14.872053 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.954787 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.954837 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.954850 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.954873 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:14 crc kubenswrapper[4620]: I0129 15:02:14.954888 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:14Z","lastTransitionTime":"2026-01-29T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.058853 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.059380 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.059475 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.059584 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.059662 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:15Z","lastTransitionTime":"2026-01-29T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.084340 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:17:56.651954376 +0000 UTC Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.163793 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.163839 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.163851 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.163875 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.163893 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:15Z","lastTransitionTime":"2026-01-29T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.267542 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.267596 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.267612 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.267635 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.267650 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:15Z","lastTransitionTime":"2026-01-29T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.371411 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.371474 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.371486 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.371540 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.371556 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:15Z","lastTransitionTime":"2026-01-29T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.444558 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.444615 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.444628 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.444647 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.444663 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:15Z","lastTransitionTime":"2026-01-29T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
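Each setters.go entry above stamps Ready=False on the node object, and the quickest way to watch for the flip back to Ready=True is to read the condition straight from the API server. A short client-go sketch (illustrative; assumes the client-go module, a kubeconfig at the default location, and a reachable API server):

```go
// nodecond.go: read back the Ready condition the kubelet is setting here.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == "Ready" {
			fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
		}
	}
}
```

Against this log's node, the output would show Ready=False with reason KubeletNotReady until a CNI config appears.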
Has your network provider started?"} Jan 29 15:02:15 crc kubenswrapper[4620]: E0129 15:02:15.459563 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.464871 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.465096 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.465174 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.465261 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.465330 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:15Z","lastTransitionTime":"2026-01-29T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:15 crc kubenswrapper[4620]: E0129 15:02:15.482218 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.488664 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.488707 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.488716 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.488734 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.488747 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:15Z","lastTransitionTime":"2026-01-29T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:15 crc kubenswrapper[4620]: E0129 15:02:15.504180 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.510955 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.511010 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.511044 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.511066 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.511082 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:15Z","lastTransitionTime":"2026-01-29T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:15 crc kubenswrapper[4620]: E0129 15:02:15.526047 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.530544 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.530704 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.530794 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.530902 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.530974 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:15Z","lastTransitionTime":"2026-01-29T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:15 crc kubenswrapper[4620]: E0129 15:02:15.549129 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:15 crc kubenswrapper[4620]: E0129 15:02:15.549356 4620 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.551980 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.552034 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.552045 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.552064 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.552076 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:15Z","lastTransitionTime":"2026-01-29T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.654862 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.654909 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.654920 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.654941 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.654953 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:15Z","lastTransitionTime":"2026-01-29T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.757859 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.757907 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.757918 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.757942 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.757957 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:15Z","lastTransitionTime":"2026-01-29T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.861441 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.861504 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.861516 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.861537 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.861554 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:15Z","lastTransitionTime":"2026-01-29T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.871823 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.871883 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:15 crc kubenswrapper[4620]: E0129 15:02:15.871985 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:15 crc kubenswrapper[4620]: E0129 15:02:15.872082 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.964868 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.964916 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.964930 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.964951 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:15 crc kubenswrapper[4620]: I0129 15:02:15.964970 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:15Z","lastTransitionTime":"2026-01-29T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.067621 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.067676 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.067687 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.067709 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.067720 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:16Z","lastTransitionTime":"2026-01-29T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.085819 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:58:20.439715416 +0000 UTC Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.171099 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.171151 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.171164 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.171184 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.171197 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:16Z","lastTransitionTime":"2026-01-29T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.274399 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.274452 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.274464 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.274499 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.274514 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:16Z","lastTransitionTime":"2026-01-29T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.378188 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.378231 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.378244 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.378261 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.378271 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:16Z","lastTransitionTime":"2026-01-29T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.480819 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.480887 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.480906 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.480933 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.480952 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:16Z","lastTransitionTime":"2026-01-29T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.584034 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.584106 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.584119 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.584148 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.584163 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:16Z","lastTransitionTime":"2026-01-29T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.687035 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.687088 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.687100 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.687124 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.687135 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:16Z","lastTransitionTime":"2026-01-29T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.789869 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.789931 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.789964 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.789985 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.790002 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:16Z","lastTransitionTime":"2026-01-29T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.872134 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:02:16 crc kubenswrapper[4620]: E0129 15:02:16.872336 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.872635 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:16 crc kubenswrapper[4620]: E0129 15:02:16.872708 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.873729 4620 scope.go:117] "RemoveContainer" containerID="17a7148615c4fa6d4751da88c28ed7a960515bc7704f157ed50f53869807fa06" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.891596 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a
54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:16Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.894008 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.894094 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.894129 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.894181 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.894197 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:16Z","lastTransitionTime":"2026-01-29T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.910369 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:16Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.927224 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:16Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.947871 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:16Z is after 2025-08-24T17:21:41Z" Jan 29 
15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.963284 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:16Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.979688 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"646c0f4c-1b62-4944-a0dc-4db4f86c1f2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a41955055dd921c81e1114d5a36e76e91c96b01286e9c7a76c6118e47cc5b1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:16Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.998124 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.998575 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.998658 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.998740 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:16 crc kubenswrapper[4620]: I0129 15:02:16.998863 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:16Z","lastTransitionTime":"2026-01-29T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.007170 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a7148615c4fa6d4751da88c28ed7a960515bc7
704f157ed50f53869807fa06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a7148615c4fa6d4751da88c28ed7a960515bc7704f157ed50f53869807fa06\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:02Z\\\",\\\"message\\\":\\\"ig-operator/machine-config-daemon-7469t after 0 failed attempt(s)\\\\nI0129 15:02:01.135339 6114 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-7469t\\\\nI0129 15:02:01.134178 6114 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 15:02:01.135354 6114 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0129 15:02:01.135353 6114 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:02:01.135361\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ks4d9_openshift-ovn-kubernetes(fa9cbed4-05b4-48af-81c2-9f8903dc765e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.026037 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.044526 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.064300 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.081411 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5
e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.086175 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 01:28:01.828889584 +0000 UTC Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.100816 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8726ce93fdc02c61e0cb178041c92b506db8b8d19de3d9ea70ed78a8e8b1d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.102806 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.102856 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:17 crc 
kubenswrapper[4620]: I0129 15:02:17.102871 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.102895 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.102913 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:17Z","lastTransitionTime":"2026-01-29T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.120058 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.135452 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.150216 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.161586 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.178066 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.195702 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.206338 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.206378 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.206389 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.206426 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.206438 4620 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:17Z","lastTransitionTime":"2026-01-29T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.310147 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.310194 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.310208 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.310231 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.310245 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:17Z","lastTransitionTime":"2026-01-29T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.391324 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovnkube-controller/1.log" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.393568 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerStarted","Data":"95294f6da283eeca7d19c770dc6c0201585b99d875f167fc8e09e27683c2d6fe"} Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.395001 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.412536 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.412576 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.412586 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.412604 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.412614 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:17Z","lastTransitionTime":"2026-01-29T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.414947 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8726ce93fdc02c61e0cb178041c92b506db8b8d19de3d9ea70ed78a8e8b1d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.433693 4620 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.451262 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.471116 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5
e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.508439 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.515279 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.515350 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.515363 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.515386 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.515399 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:17Z","lastTransitionTime":"2026-01-29T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.528940 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.542844 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.557927 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.571463 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.583320 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.599017 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 
15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.612503 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.618599 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.618677 4620 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.618689 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.618709 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.618762 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:17Z","lastTransitionTime":"2026-01-29T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.627486 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"646c0f4c-1b62-4944-a0dc-4db4f86c1f2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a41955055dd921c81e1114d5a36e76e91c96b01286e9c7a76c6118e47cc5b1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.644661 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e
91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.664491 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.680936 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.698925 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.721718 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95294f6da283eeca7d19c770dc6c0201585b99d875f167fc8e09e27683c2d6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a7148615c4fa6d4751da88c28ed7a960515bc7704f157ed50f53869807fa06\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:02Z\\\",\\\"message\\\":\\\"ig-operator/machine-config-daemon-7469t after 0 failed attempt(s)\\\\nI0129 15:02:01.135339 6114 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-7469t\\\\nI0129 15:02:01.134178 6114 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 15:02:01.135354 6114 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0129 15:02:01.135353 6114 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z]\\\\nI0129 
15:02:01.135361\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.721982 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.722010 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.722023 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.722041 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.722051 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:17Z","lastTransitionTime":"2026-01-29T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.825554 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.825615 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.825626 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.825645 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.825656 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:17Z","lastTransitionTime":"2026-01-29T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.872532 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:17 crc kubenswrapper[4620]: E0129 15:02:17.872702 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.872943 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:17 crc kubenswrapper[4620]: E0129 15:02:17.872998 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.929521 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.929579 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.929591 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.929613 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:17 crc kubenswrapper[4620]: I0129 15:02:17.929627 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:17Z","lastTransitionTime":"2026-01-29T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.036484 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.036569 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.036587 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.036620 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.036649 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:18Z","lastTransitionTime":"2026-01-29T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.086473 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 21:52:55.166767761 +0000 UTC Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.140299 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.140334 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.140342 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.140358 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.140368 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:18Z","lastTransitionTime":"2026-01-29T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.243718 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.243817 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.243828 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.243847 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.243859 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:18Z","lastTransitionTime":"2026-01-29T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.347085 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.347137 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.347150 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.347171 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.347184 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:18Z","lastTransitionTime":"2026-01-29T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.400559 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovnkube-controller/2.log" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.401956 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovnkube-controller/1.log" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.404881 4620 generic.go:334] "Generic (PLEG): container finished" podID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerID="95294f6da283eeca7d19c770dc6c0201585b99d875f167fc8e09e27683c2d6fe" exitCode=1 Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.404950 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerDied","Data":"95294f6da283eeca7d19c770dc6c0201585b99d875f167fc8e09e27683c2d6fe"} Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.405010 4620 scope.go:117] "RemoveContainer" containerID="17a7148615c4fa6d4751da88c28ed7a960515bc7704f157ed50f53869807fa06" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.408017 4620 scope.go:117] "RemoveContainer" containerID="95294f6da283eeca7d19c770dc6c0201585b99d875f167fc8e09e27683c2d6fe" Jan 29 15:02:18 crc kubenswrapper[4620]: E0129 15:02:18.408409 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ks4d9_openshift-ovn-kubernetes(fa9cbed4-05b4-48af-81c2-9f8903dc765e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.427537 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.444530 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.451795 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.451979 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.452001 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.452053 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.452071 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:18Z","lastTransitionTime":"2026-01-29T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.466040 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.486226 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.503118 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.516997 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.533800 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.552168 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 29 
15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.555812 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.555885 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.555907 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.555995 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.556015 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:18Z","lastTransitionTime":"2026-01-29T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.571262 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.586578 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"646c0f4c-1b62-4944-a0dc-4db4f86c1f2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a41955055dd921c81e1114d5a36e76e91c96b01286e9c7a76c6118e47cc5b1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.606201 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.624966 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.639842 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.658803 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.658849 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.658878 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.658898 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.658909 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:18Z","lastTransitionTime":"2026-01-29T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.662325 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95294f6da283eeca7d19c770dc6c0201585b99d875f167fc8e09e27683c2d6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17a7148615c4fa6d4751da88c28ed7a960515bc7704f157ed50f53869807fa06\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:02Z\\\",\\\"message\\\":\\\"ig-operator/machine-config-daemon-7469t after 0 failed attempt(s)\\\\nI0129 15:02:01.135339 6114 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-7469t\\\\nI0129 15:02:01.134178 6114 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0129 15:02:01.135354 6114 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0129 15:02:01.135353 6114 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:01Z is after 2025-08-24T17:21:41Z]\\\\nI0129 
15:02:01.135361\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95294f6da283eeca7d19c770dc6c0201585b99d875f167fc8e09e27683c2d6fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:18Z\\\",\\\"message\\\":\\\"olicy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:02:18.145659 6430 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"5b85277d-d9b7-4a68-8e4e-2b80594d9347\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, 
bui\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.678587 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.696328 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8726ce93fdc02c61e0cb178041c92b506db8b8d19de3d9ea70ed78a8e8b1d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"
192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2
026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742f
d0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.715176 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274
c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for 
mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.735319 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.762343 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.762378 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.762387 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.762406 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.762418 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:18Z","lastTransitionTime":"2026-01-29T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.865586 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.865641 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.865651 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.865672 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.865686 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:18Z","lastTransitionTime":"2026-01-29T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.872011 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.872158 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:18 crc kubenswrapper[4620]: E0129 15:02:18.872202 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:02:18 crc kubenswrapper[4620]: E0129 15:02:18.872369 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.968425 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.968473 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.968485 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.968504 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:18 crc kubenswrapper[4620]: I0129 15:02:18.968517 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:18Z","lastTransitionTime":"2026-01-29T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.072377 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.072442 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.072453 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.072476 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.072488 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:19Z","lastTransitionTime":"2026-01-29T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.086780 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 21:36:40.408560676 +0000 UTC Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.175661 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.175913 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.176015 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.176108 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.176194 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:19Z","lastTransitionTime":"2026-01-29T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.281238 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.281919 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.281932 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.281957 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.281974 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:19Z","lastTransitionTime":"2026-01-29T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.384911 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.384952 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.384962 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.384979 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.384994 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:19Z","lastTransitionTime":"2026-01-29T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.410931 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovnkube-controller/2.log" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.414277 4620 scope.go:117] "RemoveContainer" containerID="95294f6da283eeca7d19c770dc6c0201585b99d875f167fc8e09e27683c2d6fe" Jan 29 15:02:19 crc kubenswrapper[4620]: E0129 15:02:19.414656 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ks4d9_openshift-ovn-kubernetes(fa9cbed4-05b4-48af-81c2-9f8903dc765e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.431988 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.453548 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:19Z is after 2025-08-24T17:21:41Z" Jan 29 
15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.469219 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.482840 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"646c0f4c-1b62-4944-a0dc-4db4f86c1f2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a41955055dd921c81e1114d5a36e76e91c96b01286e9c7a76c6118e47cc5b1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.488339 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.488384 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.488396 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.488418 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.488431 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:19Z","lastTransitionTime":"2026-01-29T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.501937 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.522564 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.539963 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.565526 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95294f6da283eeca7d19c770dc6c0201585b99d8
75f167fc8e09e27683c2d6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95294f6da283eeca7d19c770dc6c0201585b99d875f167fc8e09e27683c2d6fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:18Z\\\",\\\"message\\\":\\\"olicy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:02:18.145659 6430 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"5b85277d-d9b7-4a68-8e4e-2b80594d9347\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, bui\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:02:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ks4d9_openshift-ovn-kubernetes(fa9cbed4-05b4-48af-81c2-9f8903dc765e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.580962 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.591543 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.591591 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.591604 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.591627 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.591646 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:19Z","lastTransitionTime":"2026-01-29T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.602627 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8726ce93fdc02c61e0cb178041c92b506db8b8d19de3d9ea70ed78a8e8b1d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.619402 4620 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.635362 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.649474 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.663233 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.680808 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.694168 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.694222 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.694260 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.694282 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.694295 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:19Z","lastTransitionTime":"2026-01-29T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.701490 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.720429 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.740436 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:19Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.797332 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.797893 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.797997 4620 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.798091 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.798203 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:19Z","lastTransitionTime":"2026-01-29T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.872237 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:19 crc kubenswrapper[4620]: E0129 15:02:19.872434 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.872715 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:19 crc kubenswrapper[4620]: E0129 15:02:19.872815 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.901828 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.901914 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.901929 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.901977 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:19 crc kubenswrapper[4620]: I0129 15:02:19.901999 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:19Z","lastTransitionTime":"2026-01-29T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.005668 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.005721 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.005735 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.005781 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.005797 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:20Z","lastTransitionTime":"2026-01-29T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.088038 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 15:13:23.864288196 +0000 UTC Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.109321 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.109407 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.109421 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.109445 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.109460 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:20Z","lastTransitionTime":"2026-01-29T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.213880 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.213936 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.213948 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.213971 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.213984 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:20Z","lastTransitionTime":"2026-01-29T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.317053 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.317504 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.317613 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.317729 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.317908 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:20Z","lastTransitionTime":"2026-01-29T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.421225 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.421633 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.421712 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.421828 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.421912 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:20Z","lastTransitionTime":"2026-01-29T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.525048 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.525143 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.525157 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.525183 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.525199 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:20Z","lastTransitionTime":"2026-01-29T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.627862 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.627909 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.627919 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.627937 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.627948 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:20Z","lastTransitionTime":"2026-01-29T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.731339 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.731717 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.731812 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.731902 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.731965 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:20Z","lastTransitionTime":"2026-01-29T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.835510 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.835578 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.835592 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.835616 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.835630 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:20Z","lastTransitionTime":"2026-01-29T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.872240 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.872286 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:02:20 crc kubenswrapper[4620]: E0129 15:02:20.872434 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:02:20 crc kubenswrapper[4620]: E0129 15:02:20.872561 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.890740 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:20Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.910778 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:20Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.928709 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:20Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.939015 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.939066 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.939077 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.939097 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.939114 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:20Z","lastTransitionTime":"2026-01-29T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.950556 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:20Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.966312 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:20Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.983013 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:20Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:20 crc kubenswrapper[4620]: I0129 15:02:20.996735 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"646c0f4c-1b62-4944-a0dc-4db4f86c1f2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a41955055dd921c81e1114d5a36e76e91c96b01286e9c7a76c6118e47cc5b1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:20Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.014300 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:21Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.031288 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:21Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.043694 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.043767 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.043779 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.043800 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.043817 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:21Z","lastTransitionTime":"2026-01-29T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.047523 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:21Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.063005 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:21Z is after 2025-08-24T17:21:41Z" Jan 29 
15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.078156 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:21Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.090022 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 06:10:19.242941944 +0000 UTC Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 
15:02:21.097232 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:21Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.122122 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95294f6da283eeca7d19c770dc6c0201585b99d875f167fc8e09e27683c2d6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95294f6da283eeca7d19c770dc6c0201585b99d875f167fc8e09e27683c2d6fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:18Z\\\",\\\"message\\\":\\\"olicy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:02:18.145659 6430 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"5b85277d-d9b7-4a68-8e4e-2b80594d9347\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, bui\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:02:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ks4d9_openshift-ovn-kubernetes(fa9cbed4-05b4-48af-81c2-9f8903dc765e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:21Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.141715 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly 
requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:21Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.147471 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.147886 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.148008 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.148108 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.148195 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:21Z","lastTransitionTime":"2026-01-29T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.160351 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:21Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.178210 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:21Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.195596 4620 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8726ce93fdc02c61e0cb178041c92b506db8b8d19de3d9ea70ed78a8e8b1d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:21Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.251640 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.251717 4620 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.251735 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.252113 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.252150 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:21Z","lastTransitionTime":"2026-01-29T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.355800 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.355852 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.355869 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.355897 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.355916 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:21Z","lastTransitionTime":"2026-01-29T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.459780 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.459820 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.459833 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.459854 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.459868 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:21Z","lastTransitionTime":"2026-01-29T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.563540 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.563960 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.564042 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.564123 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.564206 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:21Z","lastTransitionTime":"2026-01-29T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.668910 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.668970 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.668983 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.669003 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.669016 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:21Z","lastTransitionTime":"2026-01-29T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.773251 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.773884 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.773922 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.773950 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.773967 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:21Z","lastTransitionTime":"2026-01-29T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.871589 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:21 crc kubenswrapper[4620]: E0129 15:02:21.871803 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.872081 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:21 crc kubenswrapper[4620]: E0129 15:02:21.872293 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.876718 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.876777 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.876797 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.876851 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.876864 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:21Z","lastTransitionTime":"2026-01-29T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.979796 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.979847 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.979858 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.979878 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:21 crc kubenswrapper[4620]: I0129 15:02:21.979890 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:21Z","lastTransitionTime":"2026-01-29T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.082553 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.082630 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.082643 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.082668 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.082683 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:22Z","lastTransitionTime":"2026-01-29T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.093139 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 02:55:48.215294119 +0000 UTC Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.185582 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.185637 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.185654 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.185676 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.185691 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:22Z","lastTransitionTime":"2026-01-29T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.288871 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.288937 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.288956 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.288978 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.288990 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:22Z","lastTransitionTime":"2026-01-29T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.392090 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.392160 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.392176 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.392198 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.392213 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:22Z","lastTransitionTime":"2026-01-29T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.496932 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.496988 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.497002 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.497023 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.497041 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:22Z","lastTransitionTime":"2026-01-29T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.600888 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.600943 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.600961 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.600979 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.600992 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:22Z","lastTransitionTime":"2026-01-29T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.704656 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.704732 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.704747 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.704793 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.704812 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:22Z","lastTransitionTime":"2026-01-29T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.809426 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.809495 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.809509 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.809538 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.809556 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:22Z","lastTransitionTime":"2026-01-29T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.872212 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.872242 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:22 crc kubenswrapper[4620]: E0129 15:02:22.872424 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:02:22 crc kubenswrapper[4620]: E0129 15:02:22.872501 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.918314 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.918719 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.918816 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.918906 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:22 crc kubenswrapper[4620]: I0129 15:02:22.918981 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:22Z","lastTransitionTime":"2026-01-29T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.022636 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.022700 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.022714 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.022736 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.022778 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:23Z","lastTransitionTime":"2026-01-29T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.093554 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:10:26.741949323 +0000 UTC Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.125891 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.125940 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.125950 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.125972 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.125985 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:23Z","lastTransitionTime":"2026-01-29T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.229258 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.229319 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.229331 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.229353 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.229368 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:23Z","lastTransitionTime":"2026-01-29T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.333005 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.333468 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.333604 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.333712 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.333828 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:23Z","lastTransitionTime":"2026-01-29T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.437052 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.437097 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.437105 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.437123 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.437136 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:23Z","lastTransitionTime":"2026-01-29T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.540809 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.540897 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.540913 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.540940 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.540992 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:23Z","lastTransitionTime":"2026-01-29T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.644957 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.645042 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.645058 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.645083 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.645098 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:23Z","lastTransitionTime":"2026-01-29T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.751189 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.751264 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.751280 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.751307 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.751327 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:23Z","lastTransitionTime":"2026-01-29T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.854280 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.854334 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.854345 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.854364 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.854381 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:23Z","lastTransitionTime":"2026-01-29T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.871688 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.871818 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:23 crc kubenswrapper[4620]: E0129 15:02:23.872035 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:23 crc kubenswrapper[4620]: E0129 15:02:23.872165 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.957539 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.957603 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.957616 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.957638 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:23 crc kubenswrapper[4620]: I0129 15:02:23.957654 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:23Z","lastTransitionTime":"2026-01-29T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.061237 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.061298 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.061330 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.061351 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.061361 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:24Z","lastTransitionTime":"2026-01-29T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.094299 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 01:05:24.241412366 +0000 UTC Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.163925 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.163970 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.163982 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.164000 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.164014 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:24Z","lastTransitionTime":"2026-01-29T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.267732 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.267804 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.267814 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.267833 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.267846 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:24Z","lastTransitionTime":"2026-01-29T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.370776 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.370831 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.370844 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.370865 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.370879 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:24Z","lastTransitionTime":"2026-01-29T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.475218 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.475294 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.475310 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.475334 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.475348 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:24Z","lastTransitionTime":"2026-01-29T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.579291 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.579351 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.579363 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.579386 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.579398 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:24Z","lastTransitionTime":"2026-01-29T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.684130 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.684667 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.684903 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.685023 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.685169 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:24Z","lastTransitionTime":"2026-01-29T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.789094 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.789140 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.789148 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.789167 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.789182 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:24Z","lastTransitionTime":"2026-01-29T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.872450 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:24 crc kubenswrapper[4620]: E0129 15:02:24.872669 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.872468 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:02:24 crc kubenswrapper[4620]: E0129 15:02:24.873075 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.891929 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.891995 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.892008 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.892028 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.892042 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:24Z","lastTransitionTime":"2026-01-29T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.995235 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.995293 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.995305 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.995326 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:24 crc kubenswrapper[4620]: I0129 15:02:24.995338 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:24Z","lastTransitionTime":"2026-01-29T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.095154 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 22:37:49.274983222 +0000 UTC Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.099022 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.099370 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.099501 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.099626 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.099707 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:25Z","lastTransitionTime":"2026-01-29T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.204338 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.204433 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.204449 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.204476 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.204515 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:25Z","lastTransitionTime":"2026-01-29T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.308253 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.308778 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.309082 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.309315 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.309525 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:25Z","lastTransitionTime":"2026-01-29T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.413258 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.413327 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.413337 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.413360 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.413372 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:25Z","lastTransitionTime":"2026-01-29T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.516939 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.516986 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.517001 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.517023 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.517036 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:25Z","lastTransitionTime":"2026-01-29T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.621345 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.621402 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.621415 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.621439 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.621455 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:25Z","lastTransitionTime":"2026-01-29T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.725183 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.725271 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.725287 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.725310 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.725328 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:25Z","lastTransitionTime":"2026-01-29T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.829375 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.829841 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.829956 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.830041 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.830134 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:25Z","lastTransitionTime":"2026-01-29T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.872263 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:25 crc kubenswrapper[4620]: E0129 15:02:25.872459 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.872668 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:25 crc kubenswrapper[4620]: E0129 15:02:25.872788 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.906270 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.906334 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.906346 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.906371 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.906387 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:25Z","lastTransitionTime":"2026-01-29T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:25 crc kubenswrapper[4620]: E0129 15:02:25.924481 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:25Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.929134 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.929168 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.929179 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.929199 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.929213 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:25Z","lastTransitionTime":"2026-01-29T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:25 crc kubenswrapper[4620]: E0129 15:02:25.945421 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:25Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.951059 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.951101 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.951113 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.951131 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.951144 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:25Z","lastTransitionTime":"2026-01-29T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:25 crc kubenswrapper[4620]: E0129 15:02:25.966715 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:25Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.972980 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.973013 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.973022 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.973042 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.973053 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:25Z","lastTransitionTime":"2026-01-29T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:25 crc kubenswrapper[4620]: E0129 15:02:25.989301 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:25Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.995221 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.995274 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.995283 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.995303 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:25 crc kubenswrapper[4620]: I0129 15:02:25.995314 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:25Z","lastTransitionTime":"2026-01-29T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:26 crc kubenswrapper[4620]: E0129 15:02:26.012959 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:26Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:26 crc kubenswrapper[4620]: E0129 15:02:26.013124 4620 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.017368 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.017715 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.017819 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.017918 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.017986 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:26Z","lastTransitionTime":"2026-01-29T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.096403 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 13:10:27.007720079 +0000 UTC Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.121866 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.121927 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.121941 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.121965 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.121980 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:26Z","lastTransitionTime":"2026-01-29T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.225809 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.226309 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.226479 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.226590 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.226697 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:26Z","lastTransitionTime":"2026-01-29T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.331566 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.331652 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.331662 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.331687 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.331702 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:26Z","lastTransitionTime":"2026-01-29T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.437188 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.437664 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.437776 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.437990 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.438102 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:26Z","lastTransitionTime":"2026-01-29T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.541026 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.541088 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.541101 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.541155 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.541167 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:26Z","lastTransitionTime":"2026-01-29T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.644948 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.645031 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.645068 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.645096 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.645109 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:26Z","lastTransitionTime":"2026-01-29T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.748189 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.748253 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.748267 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.748290 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.748303 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:26Z","lastTransitionTime":"2026-01-29T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.852116 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.852635 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.852725 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.852865 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.852942 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:26Z","lastTransitionTime":"2026-01-29T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.871802 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:26 crc kubenswrapper[4620]: E0129 15:02:26.872073 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.872400 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:02:26 crc kubenswrapper[4620]: E0129 15:02:26.872520 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.957196 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.957250 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.957261 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.957282 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:26 crc kubenswrapper[4620]: I0129 15:02:26.957296 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:26Z","lastTransitionTime":"2026-01-29T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.061030 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.061107 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.061124 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.061150 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.061169 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:27Z","lastTransitionTime":"2026-01-29T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.096672 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 20:32:55.506391912 +0000 UTC
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.165007 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.165059 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.165074 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.165098 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.165114 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:27Z","lastTransitionTime":"2026-01-29T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.269124 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.269379 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.269396 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.269430 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.269448 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:27Z","lastTransitionTime":"2026-01-29T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
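The certificate_manager.go:356 line is worth reading alongside its repeats at 15:02:28, 15:02:29, and 15:02:30 below: the expiration stays fixed at 2026-02-24 05:53:03 UTC while the rotation deadline changes on every pass (2025-12-28, then 2026-01-10, 2025-11-12, 2025-11-28). That pattern is consistent with a jittered deadline recomputed per attempt; a sketch in the style of client-go's certificate manager, with the 70-90% validity window recalled from upstream and therefore an assumption here:

    // Sketch of a jittered rotation deadline: pick a point 70-90% of the
    // way through the certificate's validity. The constants are recalled
    // from client-go and should be treated as an assumption, not a quote.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
    	total := notAfter.Sub(notBefore)
    	jittery := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
    	return notBefore.Add(jittery)
    }

    func main() {
    	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // from the log line
    	notBefore := notAfter.Add(-90 * 24 * time.Hour)           // assumed issue time
    	for i := 0; i < 3; i++ {
    		fmt.Println("rotation deadline is", rotationDeadline(notBefore, notAfter))
    	}
    }

All four deadlines printed in this window already lie in the past relative to the node's clock (2026-01-29), which would explain why the rotation attempt recurs roughly once per second.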
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.373103 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.373163 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.373180 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.373204 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.373219 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:27Z","lastTransitionTime":"2026-01-29T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.478478 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.478548 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.478562 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.478585 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.478599 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:27Z","lastTransitionTime":"2026-01-29T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.581919 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.581983 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.581997 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.582023 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.582039 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:27Z","lastTransitionTime":"2026-01-29T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
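Every one of these NotReady heartbeats traces back to the single root cause named in the message: no CNI configuration file under /etc/kubernetes/cni/net.d/. The runtime keeps reporting NetworkReady=false until a network provider (on OpenShift, presumably OVN-Kubernetes) writes its config into that directory. A sketch of the equivalent readiness probe, with the file patterns borrowed from common CNI config loaders (an assumption, not a quotation of the kubelet's code):

    // Sketch of the readiness check implied by the log message: the
    // network is "ready" once at least one CNI config file exists in the
    // conf dir. Patterns follow common CNI loaders (assumed).
    package main

    import (
    	"fmt"
    	"path/filepath"
    )

    func cniConfigPresent(confDir string) (bool, error) {
    	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
    		matches, err := filepath.Glob(filepath.Join(confDir, pat))
    		if err != nil {
    			return false, err
    		}
    		if len(matches) > 0 {
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
    	if err != nil {
    		fmt.Println("check failed:", err)
    		return
    	}
    	if !ok {
    		fmt.Println("no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
    			"Has your network provider started?")
    	}
    }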
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.686408 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.686488 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.686508 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.686532 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.686551 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:27Z","lastTransitionTime":"2026-01-29T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.790338 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.790397 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.790415 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.790441 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.790455 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:27Z","lastTransitionTime":"2026-01-29T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.872296 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf"
Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.872450 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:02:27 crc kubenswrapper[4620]: E0129 15:02:27.872508 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57"
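The util.go:30 / pod_workers.go:1301 pairs recur roughly once per second for the same four pods (networking-console-plugin, network-check-source, network-check-target, network-metrics-daemon): each sync attempt needs a fresh sandbox, and sandbox creation is refused while the network is not ready. A hypothetical helper, not part of the kubelet, for tallying which pods are stuck when reading a log like this one on stdin:

    // Hypothetical log-analysis helper: count "Error syncing pod, skipping"
    // records by their pod="..." field to see which pods are blocked behind
    // the missing CNI config.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    func main() {
    	podRe := regexp.MustCompile(`pod="([^"]+)"`)
    	counts := map[string]int{}
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // status records can be very long
    	for sc.Scan() {
    		line := sc.Text()
    		if !strings.Contains(line, "Error syncing pod, skipping") {
    			continue
    		}
    		if m := podRe.FindStringSubmatch(line); m != nil {
    			counts[m[1]]++
    		}
    	}
    	for pod, n := range counts {
    		fmt.Printf("%6d  %s\n", n, pod)
    	}
    }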
pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:27 crc kubenswrapper[4620]: E0129 15:02:27.872691 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.895135 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.895187 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.895196 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.895216 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.895226 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:27Z","lastTransitionTime":"2026-01-29T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.998657 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.998711 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.998721 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.998744 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:27 crc kubenswrapper[4620]: I0129 15:02:27.998777 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:27Z","lastTransitionTime":"2026-01-29T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.097438 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 03:20:27.854321754 +0000 UTC Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.101992 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.102048 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.102057 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.102081 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.102096 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:28Z","lastTransitionTime":"2026-01-29T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.206572 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.206837 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.206852 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.206876 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.206891 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:28Z","lastTransitionTime":"2026-01-29T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.310250 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.310301 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.310312 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.310336 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.310350 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:28Z","lastTransitionTime":"2026-01-29T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.413375 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.413429 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.413441 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.413460 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.413473 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:28Z","lastTransitionTime":"2026-01-29T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.516428 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.516479 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.516495 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.516516 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.516529 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:28Z","lastTransitionTime":"2026-01-29T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.619911 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.619962 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.619972 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.619995 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.620007 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:28Z","lastTransitionTime":"2026-01-29T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.722414 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.722462 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.722471 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.722487 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.722500 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:28Z","lastTransitionTime":"2026-01-29T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.825963 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.826042 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.826054 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.826074 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.826107 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:28Z","lastTransitionTime":"2026-01-29T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.871873 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.871879 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:02:28 crc kubenswrapper[4620]: E0129 15:02:28.872093 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:28 crc kubenswrapper[4620]: E0129 15:02:28.872224 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.928976 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.929041 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.929061 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.929086 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:28 crc kubenswrapper[4620]: I0129 15:02:28.929113 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:28Z","lastTransitionTime":"2026-01-29T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.032196 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.032260 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.032280 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.032299 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.032313 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:29Z","lastTransitionTime":"2026-01-29T15:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.098273 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 07:35:12.117187917 +0000 UTC Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.136271 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.136339 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.136351 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.136373 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.136386 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:29Z","lastTransitionTime":"2026-01-29T15:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.239989 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.240045 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.240059 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.240077 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.240088 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:29Z","lastTransitionTime":"2026-01-29T15:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.343648 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.343699 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.343715 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.343736 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.343749 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:29Z","lastTransitionTime":"2026-01-29T15:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.447737 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.447899 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.447914 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.447933 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.447945 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:29Z","lastTransitionTime":"2026-01-29T15:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.551222 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.551282 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.551296 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.551318 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.551332 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:29Z","lastTransitionTime":"2026-01-29T15:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.655604 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.655659 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.655677 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.655707 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.655727 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:29Z","lastTransitionTime":"2026-01-29T15:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.759389 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.759444 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.759455 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.759476 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.759496 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:29Z","lastTransitionTime":"2026-01-29T15:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.861917 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.861960 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.861972 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.861990 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.862003 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:29Z","lastTransitionTime":"2026-01-29T15:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.871849 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:29 crc kubenswrapper[4620]: E0129 15:02:29.871954 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.872023 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:29 crc kubenswrapper[4620]: E0129 15:02:29.872161 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.964853 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.964893 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.964902 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.964914 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:29 crc kubenswrapper[4620]: I0129 15:02:29.964924 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:29Z","lastTransitionTime":"2026-01-29T15:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.068914 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.069006 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.069022 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.069052 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.069069 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:30Z","lastTransitionTime":"2026-01-29T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.099259 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 05:15:33.64501406 +0000 UTC Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.172832 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.172882 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.172891 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.172909 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.172920 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:30Z","lastTransitionTime":"2026-01-29T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.275853 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.275902 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.275915 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.275937 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.275953 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:30Z","lastTransitionTime":"2026-01-29T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.381064 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.381139 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.381229 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.381261 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.381279 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:30Z","lastTransitionTime":"2026-01-29T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.484237 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.484303 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.484324 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.484353 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.484373 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:30Z","lastTransitionTime":"2026-01-29T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.587743 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.587823 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.587836 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.587855 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.587867 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:30Z","lastTransitionTime":"2026-01-29T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.691309 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.691341 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.691349 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.691363 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.691373 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:30Z","lastTransitionTime":"2026-01-29T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.794957 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.795015 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.795032 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.795056 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.795072 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:30Z","lastTransitionTime":"2026-01-29T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.872228 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.872393 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:02:30 crc kubenswrapper[4620]: E0129 15:02:30.872447 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:02:30 crc kubenswrapper[4620]: E0129 15:02:30.872609 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.888923 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:30Z is after 2025-08-24T17:21:41Z"
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.898800 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.899110 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.899175 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.899247 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
"Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.899308 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:30Z","lastTransitionTime":"2026-01-29T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.903736 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:30Z is after 2025-08-24T17:21:41Z"
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.921241 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:30Z is after 2025-08-24T17:21:41Z"
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.933810 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:30Z is after 2025-08-24T17:21:41Z"
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.950272 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:30Z is after 2025-08-24T17:21:41Z"
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.964968 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:30Z is after 2025-08-24T17:21:41Z"
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.979159 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"646c0f4c-1b62-4944-a0dc-4db4f86c1f2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a41955055dd921c81e1114d5a36e76e91c96b01286e9c7a76c6118e47cc5b1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:30Z is after 2025-08-24T17:21:41Z"
Jan 29 15:02:30 crc kubenswrapper[4620]: I0129 15:02:30.994295 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:30Z is after 2025-08-24T17:21:41Z"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.002551 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.002585 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.002599 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.002615 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.002626 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:31Z","lastTransitionTime":"2026-01-29T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.010500 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:31Z is after 2025-08-24T17:21:41Z"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.026800 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:31Z is after 2025-08-24T17:21:41Z"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.042846 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:31Z is after 2025-08-24T17:21:41Z"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.055309 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:31Z is after 2025-08-24T17:21:41Z"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.069233 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:31Z is after 2025-08-24T17:21:41Z"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.089797 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95294f6da283eeca7d19c770dc6c0201585b99d875f167fc8e09e27683c2d6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95294f6da283eeca7d19c770dc6c0201585b99d875f167fc8e09e27683c2d6fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:18Z\\\",\\\"message\\\":\\\"olicy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:02:18.145659 6430 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"5b85277d-d9b7-4a68-8e4e-2b80594d9347\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, bui\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:02:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ks4d9_openshift-ovn-kubernetes(fa9cbed4-05b4-48af-81c2-9f8903dc765e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:31Z is after 2025-08-24T17:21:41Z"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.099973 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 09:30:59.647764603 +0000 UTC
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.105691 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.105742 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.105772 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.105798 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.105814 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:31Z","lastTransitionTime":"2026-01-29T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.106045 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:31Z is after 2025-08-24T17:21:41Z"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.120567 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.132450 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5
e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.150104 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8726ce93fdc02c61e0cb178041c92b506db8b8d19de3d9ea70ed78a8e8b1d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.208732 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.208831 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.208851 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.208878 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.208894 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:31Z","lastTransitionTime":"2026-01-29T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.312456 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.312530 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.312588 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.312609 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.312625 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:31Z","lastTransitionTime":"2026-01-29T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.415050 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.415122 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.415133 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.415150 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.415160 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:31Z","lastTransitionTime":"2026-01-29T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.517691 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.517777 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.517800 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.517830 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.518006 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:31Z","lastTransitionTime":"2026-01-29T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.620885 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.620959 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.620982 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.621012 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.621030 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:31Z","lastTransitionTime":"2026-01-29T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.723368 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.723414 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.723431 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.723452 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.723465 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:31Z","lastTransitionTime":"2026-01-29T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.826196 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.826232 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.826241 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.826253 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.826262 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:31Z","lastTransitionTime":"2026-01-29T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.872302 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.872302 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf"
Jan 29 15:02:31 crc kubenswrapper[4620]: E0129 15:02:31.872526 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57"
Jan 29 15:02:31 crc kubenswrapper[4620]: E0129 15:02:31.872433 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.929031 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.929060 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.929068 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.929080 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:31 crc kubenswrapper[4620]: I0129 15:02:31.929089 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:31Z","lastTransitionTime":"2026-01-29T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.032552 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.032614 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.032626 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.032644 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.032664 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:32Z","lastTransitionTime":"2026-01-29T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.100457 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 22:19:33.004085459 +0000 UTC
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.134941 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.134990 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.135004 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.135023 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.135036 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:32Z","lastTransitionTime":"2026-01-29T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.238165 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.238258 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.238272 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.238293 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.238308 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:32Z","lastTransitionTime":"2026-01-29T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.342059 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.342115 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.342127 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.342150 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.342163 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:32Z","lastTransitionTime":"2026-01-29T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.445857 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.445910 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.445921 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.445948 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.445961 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:32Z","lastTransitionTime":"2026-01-29T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.548740 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.548834 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.548849 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.548869 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.548883 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:32Z","lastTransitionTime":"2026-01-29T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.651267 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.651315 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.651326 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.651345 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.651359 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:32Z","lastTransitionTime":"2026-01-29T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.754195 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.754238 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.754252 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.754275 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.754288 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:32Z","lastTransitionTime":"2026-01-29T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.858470 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.858534 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.858554 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.858579 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.858598 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:32Z","lastTransitionTime":"2026-01-29T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.875703 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:02:32 crc kubenswrapper[4620]: E0129 15:02:32.875963 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.876411 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:02:32 crc kubenswrapper[4620]: E0129 15:02:32.876531 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.962013 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.962068 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.962081 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.962104 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:32 crc kubenswrapper[4620]: I0129 15:02:32.962118 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:32Z","lastTransitionTime":"2026-01-29T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.065671 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.065716 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.065729 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.065748 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.065777 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:33Z","lastTransitionTime":"2026-01-29T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.100952 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 16:01:27.101622358 +0000 UTC
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.169498 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.169566 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.169578 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.169599 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.169614 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:33Z","lastTransitionTime":"2026-01-29T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.272882 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.272945 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.272959 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.272983 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.272996 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:33Z","lastTransitionTime":"2026-01-29T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.376661 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.376707 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.376717 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.376740 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.376768 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:33Z","lastTransitionTime":"2026-01-29T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.481185 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.481228 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.481240 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.481258 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.481271 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:33Z","lastTransitionTime":"2026-01-29T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.583419 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.583451 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.583459 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.583475 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.583484 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:33Z","lastTransitionTime":"2026-01-29T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.687013 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.687070 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.687084 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.687105 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.687123 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:33Z","lastTransitionTime":"2026-01-29T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.790678 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.790724 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.790739 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.790777 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.790789 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:33Z","lastTransitionTime":"2026-01-29T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.871817 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.871916 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf"
Jan 29 15:02:33 crc kubenswrapper[4620]: E0129 15:02:33.872092 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:02:33 crc kubenswrapper[4620]: E0129 15:02:33.872206 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.893778 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.893834 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.893844 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.893867 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.893879 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:33Z","lastTransitionTime":"2026-01-29T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.997383 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.997434 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.997445 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.997465 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:33 crc kubenswrapper[4620]: I0129 15:02:33.997478 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:33Z","lastTransitionTime":"2026-01-29T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.099852 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.099922 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.099934 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.099961 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.099975 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:34Z","lastTransitionTime":"2026-01-29T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.102014 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 00:31:36.597175725 +0000 UTC
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.204170 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.204251 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.204265 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.204288 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.204306 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:34Z","lastTransitionTime":"2026-01-29T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.308007 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.308053 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.308068 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.308091 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.308105 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:34Z","lastTransitionTime":"2026-01-29T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.412236 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.412328 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.412346 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.412373 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.412387 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:34Z","lastTransitionTime":"2026-01-29T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.516620 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.516703 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.516717 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.516789 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.516806 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:34Z","lastTransitionTime":"2026-01-29T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.619914 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.619959 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.619969 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.620002 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.620014 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:34Z","lastTransitionTime":"2026-01-29T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.724164 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.724216 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.724229 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.724248 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.724261 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:34Z","lastTransitionTime":"2026-01-29T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.828239 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.828331 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.828346 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.828375 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.828388 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:34Z","lastTransitionTime":"2026-01-29T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.872396 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:02:34 crc kubenswrapper[4620]: E0129 15:02:34.872608 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.872876 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:34 crc kubenswrapper[4620]: E0129 15:02:34.873291 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.873725 4620 scope.go:117] "RemoveContainer" containerID="95294f6da283eeca7d19c770dc6c0201585b99d875f167fc8e09e27683c2d6fe" Jan 29 15:02:34 crc kubenswrapper[4620]: E0129 15:02:34.873967 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ks4d9_openshift-ovn-kubernetes(fa9cbed4-05b4-48af-81c2-9f8903dc765e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.931777 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.932173 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.932260 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.932373 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:34 crc kubenswrapper[4620]: I0129 15:02:34.932470 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:34Z","lastTransitionTime":"2026-01-29T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.035470 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.035559 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.035574 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.035599 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.035643 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:35Z","lastTransitionTime":"2026-01-29T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.102730 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 13:14:56.408348716 +0000 UTC Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.139307 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.139350 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.139364 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.139382 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.139395 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:35Z","lastTransitionTime":"2026-01-29T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.243352 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.243419 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.243433 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.243460 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.243474 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:35Z","lastTransitionTime":"2026-01-29T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.347225 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.347270 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.347280 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.347301 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.347312 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:35Z","lastTransitionTime":"2026-01-29T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.450245 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.450294 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.450307 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.450328 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.450339 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:35Z","lastTransitionTime":"2026-01-29T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.552832 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.553110 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.553182 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.553269 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.553352 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:35Z","lastTransitionTime":"2026-01-29T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.656420 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.656463 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.656475 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.656498 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.656513 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:35Z","lastTransitionTime":"2026-01-29T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.759905 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.760365 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.760468 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.760562 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.760665 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:35Z","lastTransitionTime":"2026-01-29T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.863819 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.863866 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.863877 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.863897 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.863909 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:35Z","lastTransitionTime":"2026-01-29T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.872391 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.872453 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:35 crc kubenswrapper[4620]: E0129 15:02:35.872644 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:35 crc kubenswrapper[4620]: E0129 15:02:35.872816 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.966628 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.966679 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.966690 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.966709 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:35 crc kubenswrapper[4620]: I0129 15:02:35.966721 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:35Z","lastTransitionTime":"2026-01-29T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.069683 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.069737 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.069769 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.069795 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.069809 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:36Z","lastTransitionTime":"2026-01-29T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.102962 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 09:02:31.394022198 +0000 UTC Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.172983 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.173040 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.173054 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.173077 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.173092 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:36Z","lastTransitionTime":"2026-01-29T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.276666 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.276709 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.276722 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.276741 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.276771 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:36Z","lastTransitionTime":"2026-01-29T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.380149 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.380196 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.380208 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.380229 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.380246 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:36Z","lastTransitionTime":"2026-01-29T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.408418 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.408473 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.408485 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.408511 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.408523 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:36Z","lastTransitionTime":"2026-01-29T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:36 crc kubenswrapper[4620]: E0129 15:02:36.424020 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:36Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.428791 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.428830 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.428840 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.428867 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.428880 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:36Z","lastTransitionTime":"2026-01-29T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:36 crc kubenswrapper[4620]: E0129 15:02:36.441596 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:36Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.447061 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.447109 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.447124 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.447148 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.447166 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:36Z","lastTransitionTime":"2026-01-29T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:36 crc kubenswrapper[4620]: E0129 15:02:36.461842 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:36Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.467878 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.467945 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
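Every failed patch above bottoms out in the same TLS error: the node.network-node-identity webhook's serving certificate expired on 2025-08-24, while the node clock reads 2026-01-29. The following standalone Go sketch (illustrative only, not part of kubelet; the endpoint address is taken from the log line above) performs the same client-side validity check against the webhook's serving certificate:

    // certexpiry.go - a minimal sketch: dial a TLS endpoint without chain
    // verification and compare the serving certificate's validity window
    // against the local clock, mirroring the x509 failure in the log.
    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        addr := "127.0.0.1:9743" // webhook endpoint from the log lines above
        conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()
        state := conn.ConnectionState()
        if len(state.PeerCertificates) == 0 {
            fmt.Println("no peer certificate presented")
            return
        }
        cert := state.PeerCertificates[0]
        now := time.Now()
        fmt.Printf("notBefore=%s notAfter=%s\n",
            cert.NotBefore.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        if now.After(cert.NotAfter) {
            // The condition crypto/x509 reports as "certificate has expired
            // or is not yet valid: current time ... is after ...".
            fmt.Printf("expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        }
    }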
event="NodeHasNoDiskPressure" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.467957 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.467978 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.468004 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:36Z","lastTransitionTime":"2026-01-29T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:36 crc kubenswrapper[4620]: E0129 15:02:36.488642 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:36Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.492898 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.492945 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
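The condition={...} object in each setters.go entry is the node's Ready condition (a core/v1 NodeCondition). A dependency-free Go sketch that rebuilds the same JSON, with the field values copied from the entries above and a local struct standing in for the upstream type:

    // A sketch of the Ready condition payload that setters.go logs above;
    // JSON keys mirror the core/v1 NodeCondition shape seen in the log.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type nodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        c := nodeCondition{
            Type:               "Ready",
            Status:             "False",
            LastHeartbeatTime:  "2026-01-29T15:02:36Z",
            LastTransitionTime: "2026-01-29T15:02:36Z",
            Reason:             "KubeletNotReady",
            Message:            "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?",
        }
        b, _ := json.Marshal(c)
        fmt.Println(string(b)) // reproduces the condition={...} JSON in the log
    }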
event="NodeHasNoDiskPressure" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.492959 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.492981 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.492997 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:36Z","lastTransitionTime":"2026-01-29T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:36 crc kubenswrapper[4620]: E0129 15:02:36.507083 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:36Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:36 crc kubenswrapper[4620]: E0129 15:02:36.507223 4620 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.510374 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
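The 15:02:36.507223 entry is the retry budget running out: the kubelet attempts the node status update a fixed number of times per sync before giving up (the upstream constant nodeStatusUpdateRetry, 5 in recent releases). A sketch of that loop; tryPatchStatus is a hypothetical stand-in for the real patch call, failing unconditionally the way every attempt above does:

    // A sketch of the retry behaviour behind "update node status exceeds
    // retry count". The constant mirrors the upstream kubelet value.
    package main

    import (
        "errors"
        "fmt"
    )

    const nodeStatusUpdateRetry = 5

    // tryPatchStatus is a stand-in for the real status patch; in the log
    // every attempt fails with the same webhook/x509 error.
    func tryPatchStatus() error {
        return errors.New(`Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io"`)
    }

    func main() {
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            if err := tryPatchStatus(); err != nil {
                fmt.Println("Error updating node status, will retry:", err)
                continue
            }
            return
        }
        fmt.Println("Unable to update node status: update node status exceeds retry count")
    }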
event="NodeHasSufficientMemory" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.510569 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.511023 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.511109 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.511349 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:36Z","lastTransitionTime":"2026-01-29T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.614939 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.615034 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.615050 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.615473 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.615497 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:36Z","lastTransitionTime":"2026-01-29T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.718584 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.718621 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.718629 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.718650 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.718660 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:36Z","lastTransitionTime":"2026-01-29T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.821250 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.821800 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.821813 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.821835 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.821848 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:36Z","lastTransitionTime":"2026-01-29T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.872211 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:36 crc kubenswrapper[4620]: E0129 15:02:36.872445 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.872214 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:02:36 crc kubenswrapper[4620]: E0129 15:02:36.872774 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.925432 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.925901 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.925985 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.926239 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:36 crc kubenswrapper[4620]: I0129 15:02:36.926307 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:36Z","lastTransitionTime":"2026-01-29T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.029467 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.029525 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.029566 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.029589 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.029606 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:37Z","lastTransitionTime":"2026-01-29T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.104216 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 00:46:44.275385803 +0000 UTC Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.132701 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.132790 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.132802 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.132831 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.132848 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:37Z","lastTransitionTime":"2026-01-29T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.205336 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:02:37 crc kubenswrapper[4620]: E0129 15:02:37.205628 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:41.205568317 +0000 UTC m=+161.818395972 (durationBeforeRetry 1m4s). 
Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.205851 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:37 crc kubenswrapper[4620]: E0129 15:02:37.205994 4620 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:02:37 crc kubenswrapper[4620]: E0129 15:02:37.206088 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:03:41.206074533 +0000 UTC m=+161.818902348 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.235792 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.236195 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.236269 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.236370 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.236446 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:37Z","lastTransitionTime":"2026-01-29T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.339861 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.339907 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.339918 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.339937 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.339950 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:37Z","lastTransitionTime":"2026-01-29T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.442688 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.442733 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.442746 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.442778 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.442793 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:37Z","lastTransitionTime":"2026-01-29T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.484822 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tlwgt_f66b658d-e5ec-445e-9494-0a0062e87c4c/kube-multus/0.log" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.484912 4620 generic.go:334] "Generic (PLEG): container finished" podID="f66b658d-e5ec-445e-9494-0a0062e87c4c" containerID="45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b" exitCode=1 Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.484971 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tlwgt" event={"ID":"f66b658d-e5ec-445e-9494-0a0062e87c4c","Type":"ContainerDied","Data":"45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b"} Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.485912 4620 scope.go:117] "RemoveContainer" containerID="45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.502089 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 
15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.527315 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
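Every "Failed to update status for pod" record in this capture fails the same way: the kubelet's status patch must pass the pod.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743, and Go's TLS client rejects that webhook's serving certificate because its NotAfter (2025-08-24T17:21:41Z) is earlier than the node clock (2026-01-29T15:02:37Z), a pattern consistent with a cluster image resumed long after its certificates expired. A minimal sketch of the validity check that produces this message, assuming a PEM copy of the serving certificate at a hypothetical path (the path is illustrative, not taken from this log):

    // Editorial sketch, not part of the log: reproduce the crypto/x509
    // validity check behind "certificate has expired or is not yet valid".
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        raw, err := os.ReadFile("/tmp/webhook-serving.crt") // hypothetical path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        now := time.Now()
        // crypto/x509 rejects a chain when now falls outside
        // [NotBefore, NotAfter], exactly the condition reported above.
        if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
            fmt.Printf("invalid: current time %s is after %s\n",
                now.UTC().Format(time.RFC3339),
                cert.NotAfter.UTC().Format(time.RFC3339))
            return
        }
        fmt.Println("certificate is within its validity window")
    }
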
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.546385 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5
e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.546572 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.546596 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.546605 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.546621 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.546630 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:37Z","lastTransitionTime":"2026-01-29T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.565978 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8726ce93fdc02c61e0cb178041c92b506db8b8d19de3d9ea70ed78a8e8b1d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.582422 4620 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.599574 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.619303 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.632713 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.647722 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.649445 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.649475 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.649545 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 
15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.649563 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.649574 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:37Z","lastTransitionTime":"2026-01-29T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.663314 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"2026-01-29T15:01:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c36fd60c-edaa-4c67-8a1a-632bcc16f3cb\\\\n2026-01-29T15:01:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c36fd60c-edaa-4c67-8a1a-632bcc16f3cb to /host/opt/cni/bin/\\\\n2026-01-29T15:01:51Z [verbose] multus-daemon started\\\\n2026-01-29T15:01:51Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:02:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.679143 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
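The kube-multus container failure recorded above (exit code 1) and the repeated NodeNotReady conditions describe one dependency chain: multus defers writing its own CNI config until the default network (ovn-kubernetes) has produced its readiness-indicator file, and until a plugin drops a config into /etc/kubernetes/cni/net.d/ the kubelet keeps reporting NetworkReady=false. A simplified sketch of that wait, assuming a plain stat-and-sleep poll in place of multus's real wait helper; the file path and error text come from the log, and the 45-second timeout is inferred from the container's startedAt/finishedAt gap rather than read from any config:

    // Editorial sketch, not part of the log: the shape of the
    // readiness-indicator wait that times out above.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForReadinessIndicator(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil // default network wrote its config; multus can proceed
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("timed out waiting for the condition")
    }

    func main() {
        const indicator = "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"
        if err := waitForReadinessIndicator(indicator, 45*time.Second); err != nil {
            fmt.Printf("still waiting for readinessindicatorfile @ %s: %v\n",
                indicator, err)
            os.Exit(1) // matches the exitCode 1 recorded for kube-multus
        }
    }
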
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"646c0f4c-1b62-4944-a0dc-4db4f86c1f2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a41955055dd921c81e1114d5a36e76e91c96b01286e9c7a76c6118e47cc5b1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.695659 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.711469 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.711529 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.711559 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:37 crc kubenswrapper[4620]: E0129 15:02:37.711778 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:02:37 crc kubenswrapper[4620]: E0129 15:02:37.711810 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:02:37 crc kubenswrapper[4620]: E0129 15:02:37.711828 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:02:37 crc kubenswrapper[4620]: E0129 15:02:37.711843 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:02:37 crc 
kubenswrapper[4620]: E0129 15:02:37.711834 4620 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:02:37 crc kubenswrapper[4620]: E0129 15:02:37.711850 4620 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:02:37 crc kubenswrapper[4620]: E0129 15:02:37.712018 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:03:41.711986098 +0000 UTC m=+162.324813743 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:02:37 crc kubenswrapper[4620]: E0129 15:02:37.712088 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:03:41.71205923 +0000 UTC m=+162.324887055 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:02:37 crc kubenswrapper[4620]: E0129 15:02:37.711861 4620 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:02:37 crc kubenswrapper[4620]: E0129 15:02:37.712143 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:03:41.712132363 +0000 UTC m=+162.324960188 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.713058 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.729256 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.743523 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:37Z is after 2025-08-24T17:21:41Z" Jan 29 
15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.753174 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.753236 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.753248 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.753270 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.753285 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:37Z","lastTransitionTime":"2026-01-29T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.760045 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.775277 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.799838 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95294f6da283eeca7d19c770dc6c0201585b99d8
75f167fc8e09e27683c2d6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95294f6da283eeca7d19c770dc6c0201585b99d875f167fc8e09e27683c2d6fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:18Z\\\",\\\"message\\\":\\\"olicy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:02:18.145659 6430 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"5b85277d-d9b7-4a68-8e4e-2b80594d9347\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, bui\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:02:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ks4d9_openshift-ovn-kubernetes(fa9cbed4-05b4-48af-81c2-9f8903dc765e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.856933 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.857013 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.857032 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.857053 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.857066 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:37Z","lastTransitionTime":"2026-01-29T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.871951 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:37 crc kubenswrapper[4620]: E0129 15:02:37.872584 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.872837 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:37 crc kubenswrapper[4620]: E0129 15:02:37.873057 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.968930 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.969008 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.969022 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.969045 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:37 crc kubenswrapper[4620]: I0129 15:02:37.969060 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:37Z","lastTransitionTime":"2026-01-29T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.073137 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.073194 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.073205 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.073226 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.073238 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:38Z","lastTransitionTime":"2026-01-29T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.106251 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:47:22.49987549 +0000 UTC Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.181796 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.181921 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.181938 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.181978 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.181992 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:38Z","lastTransitionTime":"2026-01-29T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.285135 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.285641 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.285781 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.285901 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.286006 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:38Z","lastTransitionTime":"2026-01-29T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.389975 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.390430 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.390524 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.390646 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.390787 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:38Z","lastTransitionTime":"2026-01-29T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.504735 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.504816 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.504828 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.504847 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.504878 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:38Z","lastTransitionTime":"2026-01-29T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.507001 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tlwgt_f66b658d-e5ec-445e-9494-0a0062e87c4c/kube-multus/0.log" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.507185 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tlwgt" event={"ID":"f66b658d-e5ec-445e-9494-0a0062e87c4c","Type":"ContainerStarted","Data":"ab556c57460cbb92047062018f7ce44d2e29138667978e01c0f201c9a118486d"} Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.530253 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.551880 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95294f6da283eeca7d19c770dc6c0201585b99d8
75f167fc8e09e27683c2d6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95294f6da283eeca7d19c770dc6c0201585b99d875f167fc8e09e27683c2d6fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:18Z\\\",\\\"message\\\":\\\"olicy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:02:18.145659 6430 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"5b85277d-d9b7-4a68-8e4e-2b80594d9347\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, bui\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:02:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ks4d9_openshift-ovn-kubernetes(fa9cbed4-05b4-48af-81c2-9f8903dc765e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.570001 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.586301 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.606157 4620 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8726ce93fdc02c61e0cb178041c92b506db8b8d19de3d9ea70ed78a8e8b1d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.609102 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.609353 4620 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.609623 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.609889 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.610115 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:38Z","lastTransitionTime":"2026-01-29T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.627358 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints 
registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.646666 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.664534 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.680936 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.699201 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab556c57460cbb92047062018f7ce44d2e29138667978e01c0f201c9a118486d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"2026-01-29T15:01:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c36fd60c-edaa-4c67-8a1a-632bcc16f3cb\\\\n2026-01-29T15:01:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c36fd60c-edaa-4c67-8a1a-632bcc16f3cb to /host/opt/cni/bin/\\\\n2026-01-29T15:01:51Z [verbose] multus-daemon started\\\\n2026-01-29T15:01:51Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:02:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.713858 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.713911 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.713926 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.713949 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.713969 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:38Z","lastTransitionTime":"2026-01-29T15:02:38Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.720422 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.743141 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.761399 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.777889 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.799590 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:38Z is after 2025-08-24T17:21:41Z" Jan 29 
15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.816595 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.816939 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.816958 4620 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.816970 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.816989 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.817001 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:38Z","lastTransitionTime":"2026-01-29T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.832157 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"646c0f4c-1b62-4944-a0dc-4db4f86c1f2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a41955055dd921c81e1114d5a36e76e91c96b01286e9c7a76c6118e47cc5b1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.849017 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e
91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.874515 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:02:38 crc kubenswrapper[4620]: E0129 15:02:38.874689 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.874923 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:38 crc kubenswrapper[4620]: E0129 15:02:38.874975 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.920812 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.921191 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.921345 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.921468 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:38 crc kubenswrapper[4620]: I0129 15:02:38.921565 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:38Z","lastTransitionTime":"2026-01-29T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.025715 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.025808 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.025820 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.025843 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.025859 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:39Z","lastTransitionTime":"2026-01-29T15:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.106568 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:51:59.883074409 +0000 UTC Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.128818 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.129292 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.129384 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.129466 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.129529 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:39Z","lastTransitionTime":"2026-01-29T15:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.232554 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.232653 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.232672 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.232694 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.232707 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:39Z","lastTransitionTime":"2026-01-29T15:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.336528 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.336585 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.336603 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.336629 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.336643 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:39Z","lastTransitionTime":"2026-01-29T15:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.439826 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.440265 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.440370 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.440469 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.440559 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:39Z","lastTransitionTime":"2026-01-29T15:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.543664 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.543719 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.543733 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.543775 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.543791 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:39Z","lastTransitionTime":"2026-01-29T15:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.646740 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.646811 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.646826 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.646851 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.646863 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:39Z","lastTransitionTime":"2026-01-29T15:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.750416 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.750458 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.750468 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.750486 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.750499 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:39Z","lastTransitionTime":"2026-01-29T15:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.855242 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.855307 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.855319 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.855341 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.855358 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:39Z","lastTransitionTime":"2026-01-29T15:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.872491 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:39 crc kubenswrapper[4620]: E0129 15:02:39.872815 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.877411 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:39 crc kubenswrapper[4620]: E0129 15:02:39.877723 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.958444 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.958495 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.958509 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.958531 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:39 crc kubenswrapper[4620]: I0129 15:02:39.958548 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:39Z","lastTransitionTime":"2026-01-29T15:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.062101 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.062154 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.062164 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.062185 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.062196 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:40Z","lastTransitionTime":"2026-01-29T15:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.107218 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 15:26:26.79436067 +0000 UTC Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.166884 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.166941 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.166953 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.166977 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.166993 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:40Z","lastTransitionTime":"2026-01-29T15:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.270590 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.271108 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.271232 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.271343 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.271415 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:40Z","lastTransitionTime":"2026-01-29T15:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.373917 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.374192 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.374275 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.374400 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.374485 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:40Z","lastTransitionTime":"2026-01-29T15:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.478491 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.478582 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.478598 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.478677 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.478717 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:40Z","lastTransitionTime":"2026-01-29T15:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.582064 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.582112 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.582127 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.582155 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.582174 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:40Z","lastTransitionTime":"2026-01-29T15:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.685377 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.685414 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.685423 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.685438 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.685453 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:40Z","lastTransitionTime":"2026-01-29T15:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.787721 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.787788 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.787799 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.787819 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.787835 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:40Z","lastTransitionTime":"2026-01-29T15:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.872023 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:40 crc kubenswrapper[4620]: E0129 15:02:40.872356 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.872426 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:02:40 crc kubenswrapper[4620]: E0129 15:02:40.874212 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.892075 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.893173 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.893217 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.893253 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.893277 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.893293 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:40Z","lastTransitionTime":"2026-01-29T15:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.895433 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8726ce93fdc02c61e0cb178041c92b506db8b8d19de3d9ea70ed78a8e8b1d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.911680 4620 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.924940 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.935517 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5
e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.947934 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.962113 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab556c57460cbb92047062018f7ce44d2e29138667978e01c0f201c9a118486d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"2026-01-29T15:01:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c36fd60c-edaa-4c67-8a1a-632bcc16f3cb\\\\n2026-01-29T15:01:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c36fd60c-edaa-4c67-8a1a-632bcc16f3cb to /host/opt/cni/bin/\\\\n2026-01-29T15:01:51Z [verbose] multus-daemon started\\\\n2026-01-29T15:01:51Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:02:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.979501 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.994485 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.995586 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.995616 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.995627 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.995645 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:40 crc kubenswrapper[4620]: I0129 15:02:40.995657 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:40Z","lastTransitionTime":"2026-01-29T15:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.007357 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.018865 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.032111 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:41Z is after 2025-08-24T17:21:41Z" Jan 29 
15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.043834 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.058026 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"646c0f4c-1b62-4944-a0dc-4db4f86c1f2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a41955055dd921c81e1114d5a36e76e91c96b01286e9c7a76c6118e47cc5b1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.073136 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.087499 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.099960 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.100200 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.100217 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.100239 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.100253 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:41Z","lastTransitionTime":"2026-01-29T15:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.102464 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.107600 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 08:30:09.973782803 +0000 UTC Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.117926 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.136734 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95294f6da283eeca7d19c770dc6c0201585b99d8
75f167fc8e09e27683c2d6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95294f6da283eeca7d19c770dc6c0201585b99d875f167fc8e09e27683c2d6fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:18Z\\\",\\\"message\\\":\\\"olicy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:02:18.145659 6430 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"5b85277d-d9b7-4a68-8e4e-2b80594d9347\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, bui\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:02:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ks4d9_openshift-ovn-kubernetes(fa9cbed4-05b4-48af-81c2-9f8903dc765e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.203558 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.203630 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.203648 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.203670 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.203690 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:41Z","lastTransitionTime":"2026-01-29T15:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.306349 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.306395 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.306406 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.306427 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.306439 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:41Z","lastTransitionTime":"2026-01-29T15:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.409648 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.409688 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.409698 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.409715 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.409730 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:41Z","lastTransitionTime":"2026-01-29T15:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.512749 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.512832 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.512845 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.512871 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.512885 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:41Z","lastTransitionTime":"2026-01-29T15:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.615607 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.615649 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.615659 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.615675 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.615686 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:41Z","lastTransitionTime":"2026-01-29T15:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.719256 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.719312 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.719327 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.719352 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.719368 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:41Z","lastTransitionTime":"2026-01-29T15:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.822120 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.822173 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.822188 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.822294 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.822316 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:41Z","lastTransitionTime":"2026-01-29T15:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.872345 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.872424 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:41 crc kubenswrapper[4620]: E0129 15:02:41.872617 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:41 crc kubenswrapper[4620]: E0129 15:02:41.872868 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.925978 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.926087 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.926104 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.926127 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:41 crc kubenswrapper[4620]: I0129 15:02:41.926164 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:41Z","lastTransitionTime":"2026-01-29T15:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.028907 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.028975 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.028986 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.029012 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.029024 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:42Z","lastTransitionTime":"2026-01-29T15:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.108796 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:31:17.57672957 +0000 UTC Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.132925 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.132989 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.133004 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.133026 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.133356 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:42Z","lastTransitionTime":"2026-01-29T15:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.236721 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.236785 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.236801 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.236843 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.236861 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:42Z","lastTransitionTime":"2026-01-29T15:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.339507 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.339577 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.339589 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.339612 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.339626 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:42Z","lastTransitionTime":"2026-01-29T15:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.443594 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.443640 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.443652 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.443672 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.443685 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:42Z","lastTransitionTime":"2026-01-29T15:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.546152 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.546206 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.546220 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.546244 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.546259 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:42Z","lastTransitionTime":"2026-01-29T15:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.649386 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.649438 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.649451 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.649475 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.649488 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:42Z","lastTransitionTime":"2026-01-29T15:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.752513 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.752580 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.752592 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.752609 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.752619 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:42Z","lastTransitionTime":"2026-01-29T15:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.857025 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.857109 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.857139 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.857167 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.857184 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:42Z","lastTransitionTime":"2026-01-29T15:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.871724 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.871795 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:42 crc kubenswrapper[4620]: E0129 15:02:42.871953 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:42 crc kubenswrapper[4620]: E0129 15:02:42.872012 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
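
[Editor's sketch] The status-patch failures above all end in the same TLS verification error from the network-node-identity webhook: "x509: certificate has expired or is not yet valid: current time 2026-01-29T... is after 2025-08-24T17:21:41Z". A minimal Go sketch of that validity-window check, assuming a PEM-encoded serving certificate on disk (the filename is hypothetical; this is not the kubelet's or the webhook's actual code):

    // expirycheck.go: compare the current time against a certificate's
    // NotBefore/NotAfter window, the check behind the x509 error above.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path; the webhook's real serving cert lives elsewhere.
        pemBytes, err := os.ReadFile("webhook-serving.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        now := time.Now().UTC()
        switch {
        case now.After(cert.NotAfter):
            // Matches the failure mode in the log: current time > NotAfter.
            fmt.Printf("certificate has expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        case now.Before(cert.NotBefore):
            fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
                now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
        default:
            fmt.Println("certificate is within its validity window")
        }
    }

Until the webhook's certificate is rotated, every status patch routed through it fails this check, which is why the same error repeats for each pod below.
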
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.960307 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.960359 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.960370 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.960390 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:42 crc kubenswrapper[4620]: I0129 15:02:42.960402 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:42Z","lastTransitionTime":"2026-01-29T15:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.062730 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.062810 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.062820 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.062840 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.062852 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:43Z","lastTransitionTime":"2026-01-29T15:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.109327 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 20:59:27.19542075 +0000 UTC Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.165544 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.165593 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.165604 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.165624 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.165637 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:43Z","lastTransitionTime":"2026-01-29T15:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.268092 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.268140 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.268153 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.268176 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.268188 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:43Z","lastTransitionTime":"2026-01-29T15:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.372470 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.372538 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.372553 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.372598 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.372616 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:43Z","lastTransitionTime":"2026-01-29T15:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.475958 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.476000 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.476032 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.476051 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.476063 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:43Z","lastTransitionTime":"2026-01-29T15:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.578827 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.578883 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.578899 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.578918 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.578929 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:43Z","lastTransitionTime":"2026-01-29T15:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.688292 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.688379 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.688407 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.688462 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.688543 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:43Z","lastTransitionTime":"2026-01-29T15:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.791315 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.791358 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.791368 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.791389 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.791399 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:43Z","lastTransitionTime":"2026-01-29T15:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.872169 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.872228 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:43 crc kubenswrapper[4620]: E0129 15:02:43.872370 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
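
[Editor's sketch] The NotReady loop above keeps reporting one condition: "no CNI configuration file in /etc/kubernetes/cni/net.d/". Conceptually, that readiness signal reduces to a directory scan for a usable CNI config. A minimal sketch, assuming the file extensions commonly accepted for CNI configs (this is illustrative, not the actual CRI-O or kubelet implementation):

    // cnicheck.go: report network-not-ready if the CNI conf dir holds no
    // recognizable configuration file, mirroring the condition in the log.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func hasCNIConfig(dir string) (bool, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            // Assumed accepted extensions for CNI network configs.
            switch strings.ToLower(filepath.Ext(e.Name())) {
            case ".conf", ".conflist", ".json":
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
        if err != nil || !ok {
            fmt.Println("container runtime network not ready: no CNI configuration file")
            return
        }
        fmt.Println("NetworkReady=true")
    }

In this log the config never appears because ovnkube-controller (the component that would write it) is itself crash-looping on the expired webhook certificate, so node readiness and pod sandbox creation stay blocked together.
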
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:43 crc kubenswrapper[4620]: E0129 15:02:43.872567 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.895227 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.895279 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.895295 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.895318 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.895330 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:43Z","lastTransitionTime":"2026-01-29T15:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.998430 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.998481 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.998496 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.998515 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:43 crc kubenswrapper[4620]: I0129 15:02:43.998526 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:43Z","lastTransitionTime":"2026-01-29T15:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.101258 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.101316 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.101330 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.101352 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.101366 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:44Z","lastTransitionTime":"2026-01-29T15:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.110526 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 22:57:41.686545769 +0000 UTC
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.204229 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.204265 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.204274 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.204288 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.204298 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:44Z","lastTransitionTime":"2026-01-29T15:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.307020 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.307095 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.307107 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.307129 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.307144 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:44Z","lastTransitionTime":"2026-01-29T15:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.410540 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.410621 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.410643 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.410673 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.410693 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:44Z","lastTransitionTime":"2026-01-29T15:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.515604 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.515681 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.515697 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.515727 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.515746 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:44Z","lastTransitionTime":"2026-01-29T15:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.619058 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.619137 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.619151 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.619175 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.619190 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:44Z","lastTransitionTime":"2026-01-29T15:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.723229 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.723341 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.723360 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.723390 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.723412 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:44Z","lastTransitionTime":"2026-01-29T15:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.826706 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.826824 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.826837 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.826857 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.826874 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:44Z","lastTransitionTime":"2026-01-29T15:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.873052 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.873218 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:02:44 crc kubenswrapper[4620]: E0129 15:02:44.873252 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:02:44 crc kubenswrapper[4620]: E0129 15:02:44.873511 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.929620 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.929663 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.929674 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.929695 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:44 crc kubenswrapper[4620]: I0129 15:02:44.929709 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:44Z","lastTransitionTime":"2026-01-29T15:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.032429 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.032467 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.032479 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.032496 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.032508 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:45Z","lastTransitionTime":"2026-01-29T15:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.111689 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 07:59:49.050757559 +0000 UTC
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.135323 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.135386 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.135411 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.135443 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.135463 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:45Z","lastTransitionTime":"2026-01-29T15:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.238679 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.238811 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.238838 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.238869 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.238887 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:45Z","lastTransitionTime":"2026-01-29T15:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.341406 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.341445 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.341456 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.341472 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.341483 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:45Z","lastTransitionTime":"2026-01-29T15:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.408511 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs\") pod \"network-metrics-daemon-twqvf\" (UID: \"82634d3f-d985-4384-bd37-426d509d4e57\") " pod="openshift-multus/network-metrics-daemon-twqvf"
Jan 29 15:02:45 crc kubenswrapper[4620]: E0129 15:02:45.408812 4620 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 15:02:45 crc kubenswrapper[4620]: E0129 15:02:45.408961 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs podName:82634d3f-d985-4384-bd37-426d509d4e57 nodeName:}" failed. No retries permitted until 2026-01-29 15:03:49.40892723 +0000 UTC m=+170.021754985 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs") pod "network-metrics-daemon-twqvf" (UID: "82634d3f-d985-4384-bd37-426d509d4e57") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.444116 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.444161 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.444174 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.444192 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.444206 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:45Z","lastTransitionTime":"2026-01-29T15:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.546502 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.546553 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.546565 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.546587 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.546600 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:45Z","lastTransitionTime":"2026-01-29T15:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.649790 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.649854 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.649864 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.649886 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.649904 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:45Z","lastTransitionTime":"2026-01-29T15:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.752870 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.752935 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.752967 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.752997 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.753016 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:45Z","lastTransitionTime":"2026-01-29T15:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.856048 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.856103 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.856206 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.856227 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.856241 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:45Z","lastTransitionTime":"2026-01-29T15:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.871794 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.872225 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf"
Jan 29 15:02:45 crc kubenswrapper[4620]: E0129 15:02:45.872287 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.872873 4620 scope.go:117] "RemoveContainer" containerID="95294f6da283eeca7d19c770dc6c0201585b99d875f167fc8e09e27683c2d6fe"
Jan 29 15:02:45 crc kubenswrapper[4620]: E0129 15:02:45.872948 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.959578 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.959644 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.959654 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.959687 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:45 crc kubenswrapper[4620]: I0129 15:02:45.959698 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:45Z","lastTransitionTime":"2026-01-29T15:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.063084 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.063164 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.063204 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.063230 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.063246 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:46Z","lastTransitionTime":"2026-01-29T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.112491 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 10:35:54.398035529 +0000 UTC
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.166554 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.166600 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.166615 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.166641 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.166658 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:46Z","lastTransitionTime":"2026-01-29T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.270267 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.270388 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.270415 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.270442 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.270459 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:46Z","lastTransitionTime":"2026-01-29T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.373384 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.373451 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.373462 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.373481 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.373496 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:46Z","lastTransitionTime":"2026-01-29T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.476118 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.476163 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.476175 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.476200 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.476211 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:46Z","lastTransitionTime":"2026-01-29T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.542102 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovnkube-controller/2.log" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.545454 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerStarted","Data":"375b349ffca0abfa8fe8f373c5b9c6ec6c82b0f7668f7a366f51df4e8c808fe9"} Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.546085 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.563685 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.579163 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 
15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.579499 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.579542 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.579553 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.579574 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.579587 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:46Z","lastTransitionTime":"2026-01-29T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.593713 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.606534 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"646c0f4c-1b62-4944-a0dc-4db4f86c1f2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a41955055dd921c81e1114d5a36e76e91c96b01286e9c7a76c6118e47cc5b1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.622005 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.637343 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.656144 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.682082 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.682132 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.682145 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.682168 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.682181 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:46Z","lastTransitionTime":"2026-01-29T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.685258 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://375b349ffca0abfa8fe8f373c5b9c6ec6c82b0f7668f7a366f51df4e8c808fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95294f6da283eeca7d19c770dc6c0201585b99d875f167fc8e09e27683c2d6fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:18Z\\\",\\\"message\\\":\\\"olicy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:02:18.145659 6430 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"5b85277d-d9b7-4a68-8e4e-2b80594d9347\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, 
bui\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:02:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.698564 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.713486 4620 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8726ce93fdc02c61e0cb178041c92b506db8b8d19de3d9ea70ed78a8e8b1d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.733283 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ff4d4e-4bcb-4b40-a3b3-f758dd585c12\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d0d37946b4822cdc7359c565e4051fe47e6143635578d8f374f8812c8578884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0cc1b3997d3c829f69452799782bb197381cb8be815f777188e16e1adfe4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92f3105ec96740d24a846262f9df0217ec7d314ac07fbe350b501e26cf09a2b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e5a8b59bcca4ac5f3d97ced941ad84b595be4
7e88664fc539f24e923d7dded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12d36335971ff4a6460a0ba01bbdd6e49d59240ccb10c873227c513fb31fa32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://927e0f3fffbe34e4fec1f1a922c5609ddbaec89397d70d4fea427680581018b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://927e0f3fffbe34e4fec1f1a922c5609ddbaec89397d70d4fea427680581018b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d019cc8dc5e4f8a915e56a1809b48339daee0e87cfd488976c5d9df2227635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d019cc8dc5e4f8a915e56a1809b48339daee0e87cfd488976c5d9df2227635\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad37ca55f5c2175aec69507894da9fc93685f4290d233a05d4b375f40e8c8577\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad37ca55f5c2175aec69507894da9fc93685f4290d233a05d4b375f40e8c8577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.748984 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744
239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 
maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.785284 4620 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.785355 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.785366 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.785390 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.785403 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:46Z","lastTransitionTime":"2026-01-29T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.787109 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.805328 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.805412 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.805430 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.805462 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.805479 4620 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:46Z","lastTransitionTime":"2026-01-29T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.817440 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: E0129 15:02:46.828097 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.835084 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.835146 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.835161 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.835185 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.835205 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:46Z","lastTransitionTime":"2026-01-29T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.854903 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: E0129 15:02:46.869348 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient 
memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\
\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\
":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.871888 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.872009 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:46 crc kubenswrapper[4620]: E0129 15:02:46.872130 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:02:46 crc kubenswrapper[4620]: E0129 15:02:46.872335 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.878917 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.878952 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.878963 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.878983 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.878995 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:46Z","lastTransitionTime":"2026-01-29T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.885202 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab556c57460cbb92047062018f7ce44d2e29138667978e01c0f201c9a118486d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"2026-01-29T15:01:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c36fd60c-edaa-4c67-8a1a-632bcc16f3cb\\\\n2026-01-29T15:01:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c36fd60c-edaa-4c67-8a1a-632bcc16f3cb to /host/opt/cni/bin/\\\\n2026-01-29T15:01:51Z [verbose] multus-daemon started\\\\n2026-01-29T15:01:51Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:02:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: E0129 15:02:46.894316 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.898342 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.898380 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.898389 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.898409 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.898419 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:46Z","lastTransitionTime":"2026-01-29T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.902419 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: E0129 15:02:46.912089 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9
d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.920872 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.921048 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.921108 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.921119 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.921145 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.921159 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:46Z","lastTransitionTime":"2026-01-29T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.941038 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: E0129 15:02:46.941283 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:46 crc kubenswrapper[4620]: E0129 15:02:46.941459 4620 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.943690 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.943729 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.943742 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.943781 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:46 crc kubenswrapper[4620]: I0129 15:02:46.943801 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:46Z","lastTransitionTime":"2026-01-29T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.052649 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.052704 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.052991 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.053014 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.053026 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:47Z","lastTransitionTime":"2026-01-29T15:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.112853 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 22:47:35.440479244 +0000 UTC Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.155424 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.155501 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.155516 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.155542 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.155557 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:47Z","lastTransitionTime":"2026-01-29T15:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.273085 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.273146 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.273166 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.273192 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.273228 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:47Z","lastTransitionTime":"2026-01-29T15:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.376277 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.376341 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.376353 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.376373 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.376389 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:47Z","lastTransitionTime":"2026-01-29T15:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.479270 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.479314 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.479325 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.479343 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.479354 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:47Z","lastTransitionTime":"2026-01-29T15:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.582189 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.582223 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.582232 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.582249 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.582259 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:47Z","lastTransitionTime":"2026-01-29T15:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.684534 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.684598 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.684610 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.684628 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.684638 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:47Z","lastTransitionTime":"2026-01-29T15:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.788648 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.788727 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.788743 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.788825 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.788846 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:47Z","lastTransitionTime":"2026-01-29T15:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.871825 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.871960 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:47 crc kubenswrapper[4620]: E0129 15:02:47.872492 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:47 crc kubenswrapper[4620]: E0129 15:02:47.872581 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.892321 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.892382 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.892399 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.892418 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.892432 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:47Z","lastTransitionTime":"2026-01-29T15:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.995221 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.995719 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.995833 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.995935 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:47 crc kubenswrapper[4620]: I0129 15:02:47.996029 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:47Z","lastTransitionTime":"2026-01-29T15:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.099780 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.099838 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.099848 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.099894 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.099905 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:48Z","lastTransitionTime":"2026-01-29T15:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.114014 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 15:53:55.42468845 +0000 UTC Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.203152 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.203224 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.203248 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.203280 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.203302 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:48Z","lastTransitionTime":"2026-01-29T15:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.306294 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.306359 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.306384 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.306411 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.306428 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:48Z","lastTransitionTime":"2026-01-29T15:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.410125 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.410219 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.410233 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.410252 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.410263 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:48Z","lastTransitionTime":"2026-01-29T15:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.512897 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.512961 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.512970 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.512993 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.513004 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:48Z","lastTransitionTime":"2026-01-29T15:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.555003 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovnkube-controller/3.log" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.555722 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovnkube-controller/2.log" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.558723 4620 generic.go:334] "Generic (PLEG): container finished" podID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerID="375b349ffca0abfa8fe8f373c5b9c6ec6c82b0f7668f7a366f51df4e8c808fe9" exitCode=1 Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.558799 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerDied","Data":"375b349ffca0abfa8fe8f373c5b9c6ec6c82b0f7668f7a366f51df4e8c808fe9"} Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.558856 4620 scope.go:117] "RemoveContainer" containerID="95294f6da283eeca7d19c770dc6c0201585b99d875f167fc8e09e27683c2d6fe" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.559725 4620 scope.go:117] "RemoveContainer" containerID="375b349ffca0abfa8fe8f373c5b9c6ec6c82b0f7668f7a366f51df4e8c808fe9" Jan 29 15:02:48 crc kubenswrapper[4620]: E0129 15:02:48.559951 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ks4d9_openshift-ovn-kubernetes(fa9cbed4-05b4-48af-81c2-9f8903dc765e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.578213 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.601028 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://375b349ffca0abfa8fe8f373c5b9c6ec6c82b0f7
668f7a366f51df4e8c808fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95294f6da283eeca7d19c770dc6c0201585b99d875f167fc8e09e27683c2d6fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:18Z\\\",\\\"message\\\":\\\"olicy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:02:18.145659 6430 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"5b85277d-d9b7-4a68-8e4e-2b80594d9347\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, bui\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:02:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375b349ffca0abfa8fe8f373c5b9c6ec6c82b0f7668f7a366f51df4e8c808fe9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:47Z\\\",\\\"message\\\":\\\"machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"3f1b9878-e751-4e46-a226-ce007d2c4aa7\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.16\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0129 15:02:47.590430 6753 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\
"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.616183 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.616237 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.616249 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.616270 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.616285 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:48Z","lastTransitionTime":"2026-01-29T15:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.627146 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ff4d4e-4bcb-4b40-a3b3-f758dd585c12\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d0d37946b4822cdc7359c565e4051fe47e6143635578d8f374f8812c8578884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0cc1b3997d3c829f69452799782bb197381cb8be815f777188e16e1adfe4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92f3105ec96740d24a846262f9df0217ec7d314ac07fbe350b501e26cf09a2b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e5a8b59bcca4ac5f3d97ced941ad84b595be47e88664fc539f24e923d7dded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12d36335971ff4a6460a0ba01bbdd6e49d59240ccb10c873227c513fb31fa32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://927e0f3fffbe34e4fec1f1a922c5609ddbaec89397d70d4fea427680581018b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://927e0f3fffbe34e4fec1f1a922c5609ddbaec89397d70d4fea427680581018b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d019cc8dc5e4f8a915e56a1809b48339daee0e87cfd488976c5d9df2227635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d019cc8dc5e4f8a915e56a1809b48339daee0e87cfd488976c5d9df2227635\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad37ca55f5c2175aec69507894da9fc93685f4290d233a05d4b375f40e8c8577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad37ca55f5c2175aec69507894da9fc93685f4290d233a05d4b375f40e8c8577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.642265 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.655407 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.668546 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5
e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.684627 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8726ce93fdc02c61e0cb178041c92b506db8b8d19de3d9ea70ed78a8e8b1d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.701903 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab556c57460cbb92047062018f7ce44d2e29138667978e01c0f201c9a118486d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"2026-01-29T15:01:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c36fd60c-edaa-4c67-8a1a-632bcc16f3cb\\\\n2026-01-29T15:01:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c36fd60c-edaa-4c67-8a1a-632bcc16f3cb to /host/opt/cni/bin/\\\\n2026-01-29T15:01:51Z [verbose] multus-daemon started\\\\n2026-01-29T15:01:51Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:02:36Z [error] have you 
checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.719974 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.719953 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.720888 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.720917 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.720947 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.720968 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:48Z","lastTransitionTime":"2026-01-29T15:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.739897 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.756536 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.771966 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.785282 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.799609 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.814730 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"646c0f4c-1b62-4944-a0dc-4db4f86c1f2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a41955055dd921c81e1114d5a36e76e91c96b01286e9c7a76c6118e47cc5b1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.824367 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.824398 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.824408 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.824428 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.824441 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:48Z","lastTransitionTime":"2026-01-29T15:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.829412 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.845860 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.864131 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.872198 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.872347 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:02:48 crc kubenswrapper[4620]: E0129 15:02:48.872392 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:48 crc kubenswrapper[4620]: E0129 15:02:48.872597 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.887669 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:48Z is after 2025-08-24T17:21:41Z" Jan 29 
15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.927569 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.927629 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.927668 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.927690 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:48 crc kubenswrapper[4620]: I0129 15:02:48.927704 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:48Z","lastTransitionTime":"2026-01-29T15:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.031078 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.031131 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.031145 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.031172 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.031193 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:49Z","lastTransitionTime":"2026-01-29T15:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.115077 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 02:05:16.988734123 +0000 UTC
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.135194 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.135257 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.135269 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.135292 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.135310 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:49Z","lastTransitionTime":"2026-01-29T15:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.237863 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.237913 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.237928 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.237946 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.237960 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:49Z","lastTransitionTime":"2026-01-29T15:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.340656 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.340721 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.340741 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.340794 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.340815 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:49Z","lastTransitionTime":"2026-01-29T15:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.443374 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.443417 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.443427 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.443442 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.443453 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:49Z","lastTransitionTime":"2026-01-29T15:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.546324 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.546378 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.546387 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.546401 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.546430 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:49Z","lastTransitionTime":"2026-01-29T15:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.564124 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovnkube-controller/3.log"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.648691 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.648806 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.648834 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.648865 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.648888 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:49Z","lastTransitionTime":"2026-01-29T15:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.752999 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.753076 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.753091 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.753114 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.753130 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:49Z","lastTransitionTime":"2026-01-29T15:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.856717 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.856820 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.856836 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.856874 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.856888 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:49Z","lastTransitionTime":"2026-01-29T15:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.872445 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:02:49 crc kubenswrapper[4620]: E0129 15:02:49.872583 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.872611 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf"
Jan 29 15:02:49 crc kubenswrapper[4620]: E0129 15:02:49.872902 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.960267 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.960348 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.960362 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.960513 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:49 crc kubenswrapper[4620]: I0129 15:02:49.960531 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:49Z","lastTransitionTime":"2026-01-29T15:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.064348 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.064384 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.064393 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.064413 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.064423 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:50Z","lastTransitionTime":"2026-01-29T15:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.115940 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 18:58:58.588250998 +0000 UTC
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.167368 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.167498 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.167514 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.167536 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.167550 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:50Z","lastTransitionTime":"2026-01-29T15:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.270699 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.270770 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.270785 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.270807 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.270821 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:50Z","lastTransitionTime":"2026-01-29T15:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.373568 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.373618 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.373660 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.373680 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.373723 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:50Z","lastTransitionTime":"2026-01-29T15:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.476902 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.476966 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.476979 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.477002 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.477024 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:50Z","lastTransitionTime":"2026-01-29T15:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.580610 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.580662 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.580678 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.580701 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.580715 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:50Z","lastTransitionTime":"2026-01-29T15:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.683550 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.683607 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.683626 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.683655 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.683676 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:50Z","lastTransitionTime":"2026-01-29T15:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.786983 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.787021 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.787030 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.787049 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.787061 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:50Z","lastTransitionTime":"2026-01-29T15:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.872491 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:02:50 crc kubenswrapper[4620]: E0129 15:02:50.872724 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.873054 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:50 crc kubenswrapper[4620]: E0129 15:02:50.873117 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.889984 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.890543 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.890582 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.890592 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.890630 4620 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.890643 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:50Z","lastTransitionTime":"2026-01-29T15:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.907616 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.926495 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.941005 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.955194 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.969018 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab556c57460cbb92047062018f7ce44d2e29138667978e01c0f201c9a118486d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"2026-01-29T15:01:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c36fd60c-edaa-4c67-8a1a-632bcc16f3cb\\\\n2026-01-29T15:01:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c36fd60c-edaa-4c67-8a1a-632bcc16f3cb to /host/opt/cni/bin/\\\\n2026-01-29T15:01:51Z [verbose] multus-daemon started\\\\n2026-01-29T15:01:51Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:02:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.981533 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"646c0f4c-1b62-4944-a0dc-4db4f86c1f2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a41955055dd921c81e1114d5a36e76e91c96b01286e9c7a76c6118e47cc5b1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.992682 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.993019 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.993101 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.993170 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.993237 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:50Z","lastTransitionTime":"2026-01-29T15:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:50 crc kubenswrapper[4620]: I0129 15:02:50.996528 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.010868 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.026715 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.041312 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:
40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.057327 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.073812 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.095992 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.096034 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.096047 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.096065 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.095683 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://375b349ffca0abfa8fe8f373c5b9c6ec6c82b0f7668f7a366f51df4e8c808fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95294f6da283eeca7d19c770dc6c0201585b99d875f167fc8e09e27683c2d6fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:18Z\\\",\\\"message\\\":\\\"olicy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:18Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:02:18.145659 6430 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"5b85277d-d9b7-4a68-8e4e-2b80594d9347\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, 
bui\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:02:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375b349ffca0abfa8fe8f373c5b9c6ec6c82b0f7668f7a366f51df4e8c808fe9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:47Z\\\",\\\"message\\\":\\\"machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"3f1b9878-e751-4e46-a226-ce007d2c4aa7\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.16\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0129 15:02:47.590430 6753 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.096076 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:51Z","lastTransitionTime":"2026-01-29T15:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.116358 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:24:50.096777009 +0000 UTC Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.123490 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ff4d4e-4bcb-4b40-a3b3-f758dd585c12\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d0d37946b4822cdc7359c565e4051fe47e6143635578d8f374f8812c8578884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0cc1b3997d3c829f69452799782bb197381cb8be815f777188e16e1adfe4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92f3105ec96740d24a846262f9df0217ec7d314ac07fbe350b501e26cf09a2b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30
a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e5a8b59bcca4ac5f3d97ced941ad84b595be47e88664fc539f24e923d7dded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12d36335971ff4a6460a0ba01bbdd6e49d59240ccb10c873227c513fb31fa32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://927e0f3fffbe34e4fec1f1a922c5609ddbaec89397d70d4fea427680581018b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://927e0f3fffbe34e4fec1f1a922c5609ddbaec89397d70d4fea427680581018b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d019cc8dc5e4f8a915e56a1809b48339daee0e87cfd488976c5d9df2227635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure
-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d019cc8dc5e4f8a915e56a1809b48339daee0e87cfd488976c5d9df2227635\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad37ca55f5c2175aec69507894da9fc93685f4290d233a05d4b375f40e8c8577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad37ca55f5c2175aec69507894da9fc93685f4290d233a05d4b375f40e8c8577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.142231 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.159249 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.175679 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5
e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.195255 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8726ce93fdc02c61e0cb178041c92b506db8b8d19de3d9ea70ed78a8e8b1d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.199453 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.199633 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.199726 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.199850 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.199928 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:51Z","lastTransitionTime":"2026-01-29T15:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.303018 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.303481 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.303558 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.303623 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.303704 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:51Z","lastTransitionTime":"2026-01-29T15:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.406872 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.406920 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.406934 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.406957 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.406974 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:51Z","lastTransitionTime":"2026-01-29T15:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.510260 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.510321 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.510339 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.510365 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.510382 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:51Z","lastTransitionTime":"2026-01-29T15:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.613598 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.613647 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.613664 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.613692 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.613709 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:51Z","lastTransitionTime":"2026-01-29T15:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.717429 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.717941 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.717951 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.717970 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.717982 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:51Z","lastTransitionTime":"2026-01-29T15:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.821351 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.821406 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.821419 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.821441 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.821453 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:51Z","lastTransitionTime":"2026-01-29T15:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.872627 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.872648 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:51 crc kubenswrapper[4620]: E0129 15:02:51.872865 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:51 crc kubenswrapper[4620]: E0129 15:02:51.873029 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.925212 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.925277 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.925290 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.925310 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:51 crc kubenswrapper[4620]: I0129 15:02:51.925322 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:51Z","lastTransitionTime":"2026-01-29T15:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.028795 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.028906 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.028931 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.028971 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.028994 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:52Z","lastTransitionTime":"2026-01-29T15:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.117407 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:17:25.763058648 +0000 UTC Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.132507 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.133026 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.133120 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.133254 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.133375 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:52Z","lastTransitionTime":"2026-01-29T15:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.237057 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.237110 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.237121 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.237146 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.237162 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:52Z","lastTransitionTime":"2026-01-29T15:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.340259 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.340305 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.340323 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.340343 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.340355 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:52Z","lastTransitionTime":"2026-01-29T15:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.443587 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.443629 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.443637 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.443653 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.443667 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:52Z","lastTransitionTime":"2026-01-29T15:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.547012 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.547065 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.547080 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.547101 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.547115 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:52Z","lastTransitionTime":"2026-01-29T15:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.649658 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.649730 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.649745 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.649997 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.650014 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:52Z","lastTransitionTime":"2026-01-29T15:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.753316 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.753421 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.753452 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.753491 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.753519 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:52Z","lastTransitionTime":"2026-01-29T15:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.869306 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.869405 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.869423 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.869452 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.869469 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:52Z","lastTransitionTime":"2026-01-29T15:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.871745 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.871811 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:52 crc kubenswrapper[4620]: E0129 15:02:52.872091 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:02:52 crc kubenswrapper[4620]: E0129 15:02:52.872387 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.972187 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.972241 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.972255 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.972276 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:52 crc kubenswrapper[4620]: I0129 15:02:52.972289 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:52Z","lastTransitionTime":"2026-01-29T15:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.075749 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.075879 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.075900 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.075934 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.075955 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:53Z","lastTransitionTime":"2026-01-29T15:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.117982 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 14:43:09.748045723 +0000 UTC Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.181296 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.181351 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.181362 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.181378 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.181396 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:53Z","lastTransitionTime":"2026-01-29T15:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.285296 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.285372 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.285391 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.285416 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.285432 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:53Z","lastTransitionTime":"2026-01-29T15:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.388936 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.388994 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.389011 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.389037 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.389056 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:53Z","lastTransitionTime":"2026-01-29T15:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.492291 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.492365 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.492388 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.492418 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.492441 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:53Z","lastTransitionTime":"2026-01-29T15:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.595436 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.595532 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.595558 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.595594 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.595622 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:53Z","lastTransitionTime":"2026-01-29T15:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.698952 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.699022 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.699044 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.699070 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.699088 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:53Z","lastTransitionTime":"2026-01-29T15:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.803268 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.803312 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.803324 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.803342 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.803354 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:53Z","lastTransitionTime":"2026-01-29T15:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.872326 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.872413 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:53 crc kubenswrapper[4620]: E0129 15:02:53.872543 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:53 crc kubenswrapper[4620]: E0129 15:02:53.872868 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.906779 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.906823 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.906834 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.906854 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:53 crc kubenswrapper[4620]: I0129 15:02:53.906867 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:53Z","lastTransitionTime":"2026-01-29T15:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.009524 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.009574 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.009588 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.009608 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.009622 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:54Z","lastTransitionTime":"2026-01-29T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.112675 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.112741 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.112773 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.112796 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.112818 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:54Z","lastTransitionTime":"2026-01-29T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.118832 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 10:38:08.378642866 +0000 UTC Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.217548 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.217610 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.217629 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.217655 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.217674 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:54Z","lastTransitionTime":"2026-01-29T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.321041 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.321094 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.321111 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.321137 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.321154 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:54Z","lastTransitionTime":"2026-01-29T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.423828 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.423871 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.423888 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.423912 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.423929 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:54Z","lastTransitionTime":"2026-01-29T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.527061 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.527117 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.527128 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.527147 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.527158 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:54Z","lastTransitionTime":"2026-01-29T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.630666 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.630735 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.630782 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.630820 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.630849 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:54Z","lastTransitionTime":"2026-01-29T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.734165 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.734243 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.734257 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.734283 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.734300 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:54Z","lastTransitionTime":"2026-01-29T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.838805 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.838856 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.838866 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.838883 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.838896 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:54Z","lastTransitionTime":"2026-01-29T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.872246 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.872291 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:54 crc kubenswrapper[4620]: E0129 15:02:54.872429 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:02:54 crc kubenswrapper[4620]: E0129 15:02:54.872563 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.942296 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.942344 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.942358 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.942382 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:54 crc kubenswrapper[4620]: I0129 15:02:54.942394 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:54Z","lastTransitionTime":"2026-01-29T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.050145 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.050204 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.050219 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.050245 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.050288 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:55Z","lastTransitionTime":"2026-01-29T15:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.119208 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 12:17:46.610620893 +0000 UTC Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.153992 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.154056 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.154066 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.154087 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.154101 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:55Z","lastTransitionTime":"2026-01-29T15:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.258376 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.258418 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.258426 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.258446 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.258456 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:55Z","lastTransitionTime":"2026-01-29T15:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.361864 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.361928 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.361939 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.361960 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.361971 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:55Z","lastTransitionTime":"2026-01-29T15:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.465529 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.465589 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.465606 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.465637 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.465656 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:55Z","lastTransitionTime":"2026-01-29T15:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.568393 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.568436 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.568447 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.568462 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.568473 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:55Z","lastTransitionTime":"2026-01-29T15:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.670822 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.670877 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.670891 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.670912 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.670924 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:55Z","lastTransitionTime":"2026-01-29T15:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.773521 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.773569 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.773583 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.773600 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.773612 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:55Z","lastTransitionTime":"2026-01-29T15:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.871772 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.871863 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:55 crc kubenswrapper[4620]: E0129 15:02:55.872490 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:55 crc kubenswrapper[4620]: E0129 15:02:55.872697 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.877855 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.877881 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.877889 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.877902 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.877911 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:55Z","lastTransitionTime":"2026-01-29T15:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.980915 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.980981 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.980992 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.981013 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:55 crc kubenswrapper[4620]: I0129 15:02:55.981025 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:55Z","lastTransitionTime":"2026-01-29T15:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.083577 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.083619 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.083631 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.083646 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.083658 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:56Z","lastTransitionTime":"2026-01-29T15:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.119378 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 04:06:14.422378239 +0000 UTC Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.187442 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.187489 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.187498 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.187516 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.187527 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:56Z","lastTransitionTime":"2026-01-29T15:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.290842 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.290914 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.290928 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.290954 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.290970 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:56Z","lastTransitionTime":"2026-01-29T15:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.394369 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.394419 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.394435 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.394455 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.394472 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:56Z","lastTransitionTime":"2026-01-29T15:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.497881 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.497962 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.497994 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.498022 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.498041 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:56Z","lastTransitionTime":"2026-01-29T15:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.600110 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.600162 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.600170 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.600192 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.600205 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:56Z","lastTransitionTime":"2026-01-29T15:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.702969 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.703038 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.703054 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.703073 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.703085 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:56Z","lastTransitionTime":"2026-01-29T15:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.805712 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.805764 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.805777 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.805793 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.805804 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:56Z","lastTransitionTime":"2026-01-29T15:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.872320 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.872320 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:02:56 crc kubenswrapper[4620]: E0129 15:02:56.872503 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:56 crc kubenswrapper[4620]: E0129 15:02:56.872572 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.908656 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.908714 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.908727 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.908770 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:56 crc kubenswrapper[4620]: I0129 15:02:56.908787 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:56Z","lastTransitionTime":"2026-01-29T15:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.011656 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.011700 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.011710 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.011723 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.011733 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:57Z","lastTransitionTime":"2026-01-29T15:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.028874 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.028954 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.028966 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.028987 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.029002 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:57Z","lastTransitionTime":"2026-01-29T15:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:57 crc kubenswrapper[4620]: E0129 15:02:57.044375 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.048546 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.048594 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.048602 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.048620 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.048632 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:57Z","lastTransitionTime":"2026-01-29T15:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:57 crc kubenswrapper[4620]: E0129 15:02:57.060771 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.064063 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.064091 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
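[Annotation, not part of the captured log] Every retry above fails at the same point: the kubelet's node-status patch is intercepted by the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743/node, whose serving certificate expired on 2025-08-24T17:21:41Z. A minimal Go sketch of how the presented certificate's validity window could be confirmed from the node; the endpoint is taken from the error text, everything else is illustrative:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Endpoint copied from the failing webhook Post in the log above.
	// InsecureSkipVerify disables chain/expiry verification so the
	// handshake completes and the expired certificate can be read.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook endpoint: %v", err)
	}
	defer conn.Close()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		// For the certificate in this log, NotAfter should print a time
		// before the node's current clock (2026-01-29T15:02:57Z).
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}

The kubelet, verifying normally, rejects exactly that window: "current time 2026-01-29T15:02:57Z is after 2025-08-24T17:21:41Z".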
event="NodeHasNoDiskPressure" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.064099 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.064114 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.064123 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:57Z","lastTransitionTime":"2026-01-29T15:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:57 crc kubenswrapper[4620]: E0129 15:02:57.081403 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.087043 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.087093 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.087106 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.087128 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.087145 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:57Z","lastTransitionTime":"2026-01-29T15:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:57 crc kubenswrapper[4620]: E0129 15:02:57.101802 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.106691 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.106724 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
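[Annotation, not part of the captured log] Independently of the webhook failure, the Ready condition stays False because the kubelet finds no CNI configuration under /etc/kubernetes/cni/net.d/. A small Go sketch of the check implied by that condition message; the directory path comes from the log, the program is illustrative rather than actual kubelet code:

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the NotReady condition message above.
	const cniConfDir = "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(cniConfDir)
	if err != nil {
		log.Fatalf("read %s: %v", cniConfDir, err)
	}
	if len(entries) == 0 {
		// The state this log reports: the network plugin has not
		// written its config, so the runtime network stays NotReady.
		fmt.Println("no CNI configuration files found")
		return
	}
	for _, e := range entries {
		fmt.Println(filepath.Join(cniConfDir, e.Name()))
	}
}

Both symptoms recur together throughout this log, which is consistent with the network provider being unable to start while the node-identity webhook certificate is expired.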
event="NodeHasNoDiskPressure" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.106733 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.106749 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.106778 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:57Z","lastTransitionTime":"2026-01-29T15:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:57 crc kubenswrapper[4620]: E0129 15:02:57.118362 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:57 crc kubenswrapper[4620]: E0129 15:02:57.118530 4620 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.119615 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2026-01-07 18:42:43.493085905 +0000 UTC Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.120439 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.120511 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.120530 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.120551 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.120567 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:57Z","lastTransitionTime":"2026-01-29T15:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.223502 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.223546 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.223555 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.223570 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.223579 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:57Z","lastTransitionTime":"2026-01-29T15:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.325872 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.325918 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.325931 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.325946 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.325957 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:57Z","lastTransitionTime":"2026-01-29T15:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.428391 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.428436 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.428445 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.428464 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.428473 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:57Z","lastTransitionTime":"2026-01-29T15:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.530537 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.530574 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.530583 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.530596 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.530614 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:57Z","lastTransitionTime":"2026-01-29T15:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.634033 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.634076 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.634087 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.634104 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.634114 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:57Z","lastTransitionTime":"2026-01-29T15:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.738623 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.738674 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.738686 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.738706 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.738720 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:57Z","lastTransitionTime":"2026-01-29T15:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.841789 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.841844 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.841857 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.841881 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.841896 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:57Z","lastTransitionTime":"2026-01-29T15:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.872062 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.872270 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:57 crc kubenswrapper[4620]: E0129 15:02:57.872381 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:57 crc kubenswrapper[4620]: E0129 15:02:57.872500 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.945093 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.945152 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.945169 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.945191 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:57 crc kubenswrapper[4620]: I0129 15:02:57.945205 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:57Z","lastTransitionTime":"2026-01-29T15:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.047115 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.047179 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.047191 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.047216 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.047229 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:58Z","lastTransitionTime":"2026-01-29T15:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.119797 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 01:26:05.746292358 +0000 UTC Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.149670 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.149739 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.149770 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.149787 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.149801 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:58Z","lastTransitionTime":"2026-01-29T15:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.252154 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.252190 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.252198 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.252213 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.252222 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:58Z","lastTransitionTime":"2026-01-29T15:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.354404 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.354442 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.354452 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.354467 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.354479 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:58Z","lastTransitionTime":"2026-01-29T15:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.457097 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.457136 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.457147 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.457165 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.457175 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:58Z","lastTransitionTime":"2026-01-29T15:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.560396 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.560436 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.560446 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.560460 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.560470 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:58Z","lastTransitionTime":"2026-01-29T15:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.662904 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.662929 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.662938 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.662951 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.662961 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:58Z","lastTransitionTime":"2026-01-29T15:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.766034 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.766316 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.766422 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.766512 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.766597 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:58Z","lastTransitionTime":"2026-01-29T15:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.869725 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.869814 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.869836 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.869864 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.869880 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:58Z","lastTransitionTime":"2026-01-29T15:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.872089 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.872121 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:02:58 crc kubenswrapper[4620]: E0129 15:02:58.872304 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:02:58 crc kubenswrapper[4620]: E0129 15:02:58.872378 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.972860 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.972918 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.972933 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.972954 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:58 crc kubenswrapper[4620]: I0129 15:02:58.972970 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:58Z","lastTransitionTime":"2026-01-29T15:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.076141 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.076191 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.076236 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.076258 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.076272 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:59Z","lastTransitionTime":"2026-01-29T15:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.120391 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 03:41:35.374070681 +0000 UTC Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.179353 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.179698 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.179897 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.180082 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.180215 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:59Z","lastTransitionTime":"2026-01-29T15:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.282981 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.283053 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.283064 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.283086 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.283100 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:59Z","lastTransitionTime":"2026-01-29T15:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.385807 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.385848 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.385861 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.385880 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.385892 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:59Z","lastTransitionTime":"2026-01-29T15:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.488420 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.488469 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.488479 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.488500 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.488516 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:59Z","lastTransitionTime":"2026-01-29T15:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.590652 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.590938 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.591019 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.591107 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.591187 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:59Z","lastTransitionTime":"2026-01-29T15:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.693685 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.693733 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.693747 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.693790 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.693805 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:59Z","lastTransitionTime":"2026-01-29T15:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.797344 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.797427 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.797454 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.797489 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.797518 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:59Z","lastTransitionTime":"2026-01-29T15:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.872235 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.872235 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:02:59 crc kubenswrapper[4620]: E0129 15:02:59.872832 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:02:59 crc kubenswrapper[4620]: E0129 15:02:59.872954 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.874256 4620 scope.go:117] "RemoveContainer" containerID="375b349ffca0abfa8fe8f373c5b9c6ec6c82b0f7668f7a366f51df4e8c808fe9" Jan 29 15:02:59 crc kubenswrapper[4620]: E0129 15:02:59.874514 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ks4d9_openshift-ovn-kubernetes(fa9cbed4-05b4-48af-81c2-9f8903dc765e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.896210 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ff4d4e-4bcb-4b40-a3b3-f758dd585c12\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d0d37946b4822cdc7359c565e4051fe47e6143635578d8f374f8812c8578884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0cc1b3997d3c829f69452799782bb197381cb8be815f777188e16e1adfe4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a
5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92f3105ec96740d24a846262f9df0217ec7d314ac07fbe350b501e26cf09a2b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e5a8b59bcca4ac5f3d97ced941ad84b595be47e88664fc539f24e923d7dded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12d36335971ff4a6460a0ba01bbdd6e49d59240ccb10c873227c513fb31fa32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://927e0f3fffbe34e4fec1f1a922c5609ddbaec89397d70d4fea427680581018b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://927e0f3fffbe34e4fec1f1a922c5609ddbaec89397d70d4fea427680581018b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d019cc8dc5e4f8a915e56a1809b48339daee0e87cfd488976c5d9df2227635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d019cc8dc5e4f8a915e56a1809b48339daee0e87cfd488976c5d9df2227635\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad37ca55f5c2175aec69507894da9fc93685f4290d233a05d4b375f40e8c8577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad37ca55f5c2175aec69507894da9fc93685f4290d233a05d4b375f40e8c8577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.900034 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.900071 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.900081 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.900097 4620 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.900108 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:02:59Z","lastTransitionTime":"2026-01-29T15:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.912664 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qu
ay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.926038 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.942299 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5
e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.955678 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8726ce93fdc02c61e0cb178041c92b506db8b8d19de3d9ea70ed78a8e8b1d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.967517 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-29T15:02:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.978663 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.989861 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:02:59 crc kubenswrapper[4620]: I0129 15:02:59.997831 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:02:59Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.001981 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.002029 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.002040 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.002056 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.002067 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:03:00Z","lastTransitionTime":"2026-01-29T15:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.008287 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.020156 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab556c57460cbb92047062018f7ce44d2e29138667978e01c0f201c9a118486d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"2026-01-29T15:01:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c36fd60c-edaa-4c67-8a1a-632bcc16f3cb\\\\n2026-01-29T15:01:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c36fd60c-edaa-4c67-8a1a-632bcc16f3cb to /host/opt/cni/bin/\\\\n2026-01-29T15:01:51Z [verbose] multus-daemon started\\\\n2026-01-29T15:01:51Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:02:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.028678 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"646c0f4c-1b62-4944-a0dc-4db4f86c1f2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a41955055dd921c81e1114d5a36e76e91c96b01286e9c7a76c6118e47cc5b1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.037895 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.048782 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.061461 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.071168 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:00Z is after 2025-08-24T17:21:41Z" Jan 29 
15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.080552 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:00Z is after 2025-08-24T17:21:41Z"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.091871 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:00Z is after 2025-08-24T17:21:41Z"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.104292 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.104332 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.104342 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.104358 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.104368 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:03:00Z","lastTransitionTime":"2026-01-29T15:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.109014 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://375b349ffca0abfa8fe8f373c5b9c6ec6c82b0f7668f7a366f51df4e8c808fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375b349ffca0abfa8fe8f373c5b9c6ec6c82b0f7668f7a366f51df4e8c808fe9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:47Z\\\",\\\"message\\\":\\\"machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"3f1b9878-e751-4e46-a226-ce007d2c4aa7\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.16\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0129 15:02:47.590430 6753 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:02:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ks4d9_openshift-ovn-kubernetes(fa9cbed4-05b4-48af-81c2-9f8903dc765e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:00Z is after 2025-08-24T17:21:41Z"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.121106 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 09:30:04.459233997 +0000 UTC
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.206118 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.206147 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.206171 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.206198 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.206206 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:03:00Z","lastTransitionTime":"2026-01-29T15:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.317475 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.317525 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.317538 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.317558 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.317571 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:03:00Z","lastTransitionTime":"2026-01-29T15:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.420122 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.420174 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.420189 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.420209 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.420221 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:03:00Z","lastTransitionTime":"2026-01-29T15:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.522707 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.522830 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.522845 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.522869 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.522884 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:03:00Z","lastTransitionTime":"2026-01-29T15:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.625649 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.625703 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.625720 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.625737 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.625746 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:03:00Z","lastTransitionTime":"2026-01-29T15:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.728212 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.728264 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.728273 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.728289 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.728298 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:03:00Z","lastTransitionTime":"2026-01-29T15:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:03:00 crc kubenswrapper[4620]: E0129 15:03:00.828793 4620 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.871502 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:03:00 crc kubenswrapper[4620]: E0129 15:03:00.871895 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.871914 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:03:00 crc kubenswrapper[4620]: E0129 15:03:00.872323 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.894144 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:00Z is after 2025-08-24T17:21:41Z"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.932889 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://375b349ffca0abfa8fe8f373c5b9c6ec6c82b0f7668f7a366f51df4e8c808fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375b349ffca0abfa8fe8f373c5b9c6ec6c82b0f7668f7a366f51df4e8c808fe9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:47Z\\\",\\\"message\\\":\\\"machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"3f1b9878-e751-4e46-a226-ce007d2c4aa7\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.16\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0129 15:02:47.590430 6753 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:02:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ks4d9_openshift-ovn-kubernetes(fa9cbed4-05b4-48af-81c2-9f8903dc765e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:00Z is after 2025-08-24T17:21:41Z"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.956979 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:00Z is after 2025-08-24T17:21:41Z"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.974400 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:00Z is after 2025-08-24T17:21:41Z"
Jan 29 15:03:00 crc kubenswrapper[4620]: E0129 15:03:00.974732 4620 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 15:03:00 crc kubenswrapper[4620]: I0129 15:03:00.993001 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8726ce93fdc02c61e0cb178041c92b506db8b8d19de3d9ea70ed78a8e8b1d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:00Z is after 2025-08-24T17:21:41Z"
Jan 29 15:03:01 crc kubenswrapper[4620]: I0129 15:03:01.015950 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ff4d4e-4bcb-4b40-a3b3-f758dd585c12\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d0d37946b4822cdc7359c565e4051fe47e6143635578d8f374f8812c8578884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0cc1b3997d3c829f69452799782bb197381cb8be815f777188e16e1adfe4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92f3105ec96740d24a846262f9df0217ec7d314ac07fbe350b501e26cf09a2b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:08Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e5a8b59bcca4ac5f3d97ced941ad84b595be47e88664fc539f24e923d7dded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12d36335971ff4a6460a0ba01bbdd6e49d59240ccb10c873227c513fb31fa32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://927e0f3fffbe34e4fec1f1a922c5609ddbaec89397d70d4fea427680581018b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://927e0f3fffbe34e4fec1f1a922c5609ddbaec89397d70d4fea427680581018b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d019cc8dc5e4f8a915e56a1809b48339daee0e87cfd488976c5d9df2227635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d019cc8dc5e4f8a915e56a1809b48339daee0e87cfd488976c5d9df2227635\\\",\\\"exitCode\\\"
:0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad37ca55f5c2175aec69507894da9fc93685f4290d233a05d4b375f40e8c8577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad37ca55f5c2175aec69507894da9fc93685f4290d233a05d4b375f40e8c8577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:01 crc kubenswrapper[4620]: I0129 15:03:01.038090 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:01 crc kubenswrapper[4620]: I0129 15:03:01.057920 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:01 crc kubenswrapper[4620]: I0129 15:03:01.070635 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:01 crc kubenswrapper[4620]: I0129 15:03:01.083126 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:01 crc kubenswrapper[4620]: I0129 15:03:01.098689 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab556c57460cbb92047062018f7ce44d2e29138667978e01c0f201c9a118486d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"2026-01-29T15:01:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c36fd60c-edaa-4c67-8a1a-632bcc16f3cb\\\\n2026-01-29T15:01:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c36fd60c-edaa-4c67-8a1a-632bcc16f3cb to /host/opt/cni/bin/\\\\n2026-01-29T15:01:51Z [verbose] multus-daemon started\\\\n2026-01-29T15:01:51Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:02:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:01 crc kubenswrapper[4620]: I0129 15:03:01.114572 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:01 crc kubenswrapper[4620]: I0129 15:03:01.121829 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 13:33:55.486836321 +0000 UTC Jan 29 15:03:01 crc kubenswrapper[4620]: I0129 15:03:01.129255 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:01 crc kubenswrapper[4620]: I0129 15:03:01.145745 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:01 crc kubenswrapper[4620]: I0129 15:03:01.161137 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:01 crc kubenswrapper[4620]: I0129 15:03:01.174520 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:01Z is after 2025-08-24T17:21:41Z" Jan 29 
15:03:01 crc kubenswrapper[4620]: I0129 15:03:01.187072 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:01 crc kubenswrapper[4620]: I0129 15:03:01.198778 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"646c0f4c-1b62-4944-a0dc-4db4f86c1f2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a41955055dd921c81e1114d5a36e76e91c96b01286e9c7a76c6118e47cc5b1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:01 crc kubenswrapper[4620]: I0129 15:03:01.212168 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:01 crc kubenswrapper[4620]: I0129 15:03:01.871778 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:03:01 crc kubenswrapper[4620]: I0129 15:03:01.871883 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:03:01 crc kubenswrapper[4620]: E0129 15:03:01.871948 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:03:01 crc kubenswrapper[4620]: E0129 15:03:01.872027 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:03:02 crc kubenswrapper[4620]: I0129 15:03:02.122594 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 18:09:10.405964167 +0000 UTC Jan 29 15:03:02 crc kubenswrapper[4620]: I0129 15:03:02.871873 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:03:02 crc kubenswrapper[4620]: I0129 15:03:02.871990 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:03:02 crc kubenswrapper[4620]: E0129 15:03:02.872077 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:03:02 crc kubenswrapper[4620]: E0129 15:03:02.872234 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:03:03 crc kubenswrapper[4620]: I0129 15:03:03.124897 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 00:58:10.319002447 +0000 UTC Jan 29 15:03:03 crc kubenswrapper[4620]: I0129 15:03:03.872058 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:03:03 crc kubenswrapper[4620]: I0129 15:03:03.872058 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:03:03 crc kubenswrapper[4620]: E0129 15:03:03.872229 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:03:03 crc kubenswrapper[4620]: E0129 15:03:03.872279 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:03:04 crc kubenswrapper[4620]: I0129 15:03:04.125930 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:27:15.210352828 +0000 UTC Jan 29 15:03:04 crc kubenswrapper[4620]: I0129 15:03:04.872213 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:03:04 crc kubenswrapper[4620]: I0129 15:03:04.872217 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:03:04 crc kubenswrapper[4620]: E0129 15:03:04.872435 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:03:04 crc kubenswrapper[4620]: E0129 15:03:04.872609 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:03:05 crc kubenswrapper[4620]: I0129 15:03:05.126969 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 02:46:23.516004472 +0000 UTC Jan 29 15:03:05 crc kubenswrapper[4620]: I0129 15:03:05.872236 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:03:05 crc kubenswrapper[4620]: E0129 15:03:05.872908 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:03:05 crc kubenswrapper[4620]: I0129 15:03:05.872365 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:03:05 crc kubenswrapper[4620]: E0129 15:03:05.873111 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:03:05 crc kubenswrapper[4620]: E0129 15:03:05.976032 4620 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 15:03:06 crc kubenswrapper[4620]: I0129 15:03:06.127295 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 18:18:17.869751215 +0000 UTC Jan 29 15:03:06 crc kubenswrapper[4620]: I0129 15:03:06.871677 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:03:06 crc kubenswrapper[4620]: I0129 15:03:06.871677 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:03:06 crc kubenswrapper[4620]: E0129 15:03:06.871972 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:03:06 crc kubenswrapper[4620]: E0129 15:03:06.872360 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.128169 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 17:59:33.773801772 +0000 UTC Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.410110 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.410166 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.410186 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.410211 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.410231 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:03:07Z","lastTransitionTime":"2026-01-29T15:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:03:07 crc kubenswrapper[4620]: E0129 15:03:07.427801 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.432593 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.432660 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.432678 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.432705 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.432721 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:03:07Z","lastTransitionTime":"2026-01-29T15:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:03:07 crc kubenswrapper[4620]: E0129 15:03:07.447395 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.452887 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.452921 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.452933 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.452955 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.452969 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:03:07Z","lastTransitionTime":"2026-01-29T15:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:03:07 crc kubenswrapper[4620]: E0129 15:03:07.470872 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.475017 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.475076 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.475095 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.475120 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.475135 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:03:07Z","lastTransitionTime":"2026-01-29T15:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:03:07 crc kubenswrapper[4620]: E0129 15:03:07.491350 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.496302 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.496392 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.496411 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.496434 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.496447 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:03:07Z","lastTransitionTime":"2026-01-29T15:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:03:07 crc kubenswrapper[4620]: E0129 15:03:07.511443 4620 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:03:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d83a94e9-ca2c-42f8-a4ad-a92cb455696f\\\",\\\"systemUUID\\\":\\\"9d9abc70-469a-4f92-842f-65aa805098a6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:07 crc kubenswrapper[4620]: E0129 15:03:07.511610 4620 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.871708 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:03:07 crc kubenswrapper[4620]: I0129 15:03:07.871801 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:03:07 crc kubenswrapper[4620]: E0129 15:03:07.871876 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:03:07 crc kubenswrapper[4620]: E0129 15:03:07.871948 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:03:08 crc kubenswrapper[4620]: I0129 15:03:08.129196 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 14:50:58.153133539 +0000 UTC Jan 29 15:03:08 crc kubenswrapper[4620]: I0129 15:03:08.872085 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:03:08 crc kubenswrapper[4620]: I0129 15:03:08.872634 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:03:08 crc kubenswrapper[4620]: E0129 15:03:08.872866 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:03:08 crc kubenswrapper[4620]: E0129 15:03:08.872965 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:03:09 crc kubenswrapper[4620]: I0129 15:03:09.129441 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 11:02:49.906165784 +0000 UTC Jan 29 15:03:09 crc kubenswrapper[4620]: I0129 15:03:09.871723 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:03:09 crc kubenswrapper[4620]: E0129 15:03:09.872036 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:03:09 crc kubenswrapper[4620]: I0129 15:03:09.871746 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:03:09 crc kubenswrapper[4620]: E0129 15:03:09.872184 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:03:10 crc kubenswrapper[4620]: I0129 15:03:10.130704 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 15:17:43.025005617 +0000 UTC Jan 29 15:03:10 crc kubenswrapper[4620]: I0129 15:03:10.871923 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:03:10 crc kubenswrapper[4620]: I0129 15:03:10.871999 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:03:10 crc kubenswrapper[4620]: E0129 15:03:10.872153 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:03:10 crc kubenswrapper[4620]: E0129 15:03:10.872351 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:03:10 crc kubenswrapper[4620]: I0129 15:03:10.873332 4620 scope.go:117] "RemoveContainer" containerID="375b349ffca0abfa8fe8f373c5b9c6ec6c82b0f7668f7a366f51df4e8c808fe9" Jan 29 15:03:10 crc kubenswrapper[4620]: E0129 15:03:10.873536 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ks4d9_openshift-ovn-kubernetes(fa9cbed4-05b4-48af-81c2-9f8903dc765e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" Jan 29 15:03:10 crc kubenswrapper[4620]: I0129 15:03:10.902004 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6ff4d4e-4bcb-4b40-a3b3-f758dd585c12\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d0d37946b4822cdc7359c565e4051fe47e6143635578d8f374f8812c8578884\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f0cc1b3997d3c829f69452799782bb197381cb8be815f777188e16e1adfe4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92f3105ec96740d24a846262f9df0217ec7d314ac07fbe350b501e26cf09a2b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79e5a8b59bcca4ac5f3d97ced941ad84b595be47e88664fc539f24e923d7dded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f12d36335971ff4a6460a0ba01bbdd6e49d59240ccb10c873227c513fb31fa32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://927e0f3fffbe34e4fec1f1a922c5609ddbaec89397d70d4fea427680581018b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://927e0f3fffbe34e4fec1f1a922c5609ddbaec89397d70d4fea427680581018b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\"
,\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d019cc8dc5e4f8a915e56a1809b48339daee0e87cfd488976c5d9df2227635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d019cc8dc5e4f8a915e56a1809b48339daee0e87cfd488976c5d9df2227635\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad37ca55f5c2175aec69507894da9fc93685f4290d233a05d4b375f40e8c8577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad37ca55f5c2175aec69507894da9fc93685f4290d233a05d4b375f40e8c8577\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:10 crc kubenswrapper[4620]: I0129 15:03:10.919047 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb05bef-ef02-4ac8-8d18-4d13ded49beb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:01:27Z\\\",\\\"message\\\":\\\"W0129 15:01:10.278394 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:01:10.278860 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769698870 cert, and key in /tmp/serving-cert-1413589199/serving-signer.crt, /tmp/serving-cert-1413589199/serving-signer.key\\\\nI0129 15:01:10.470200 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:01:20.472873 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 15:01:20.473009 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:01:20.475291 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1413589199/tls.crt::/tmp/serving-cert-1413589199/tls.key\\\\\\\"\\\\nI0129 15:01:26.329132 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:01:26.357322 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:01:26.357363 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:01:26.357396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:01:26.357402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:01:26.375712 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:01:26.375740 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:01:26.375745 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0129 15:01:26.379813 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:01:26.383218 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:10 crc kubenswrapper[4620]: I0129 15:03:10.939587 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45e7ef2d-49d1-4945-b876-3cba2004ea78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99a83a269481601d5d178f7cf8e9ac7c25da28f75ce5553609d4f54e1832d1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d94a2236dffd379e8d043285a68fb2284743449a4e614e1c3a5a5de53c6666f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5268b219a8192c225ba02b2e223ea31a43b01f44c13abe61e98b2b32bc58df8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:10 crc kubenswrapper[4620]: I0129 15:03:10.957128 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a76cce43-3d01-4158-b23a-e21fd5927792\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0bf67e6610aede7cf70169b78d7ad1b2f6fce3c1ba9d250addb73d5f0c2312f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b439ba4389a7208e5
e85554fd7f1f75d8456577712f9f05f714083219a6c529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kglgc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7469t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:10 crc kubenswrapper[4620]: E0129 15:03:10.976862 4620 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 29 15:03:10 crc kubenswrapper[4620]: I0129 15:03:10.985861 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f995d57b-b546-4226-83f5-3e2c1becec57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8726ce93fdc02c61e0cb178041c92b506db8b8d19de3d9ea70ed78a8e8b1d4b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dea6220a524a7279e94b26de8f42a9ee95f119fd9f2f62d9e077341e5b831689\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd4e0a59e1bc457a695169e08433c88b46999de9a739a571339572f7a3b7176\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fdaffd9a86ef8c010150640d25265898dbc2ffdbbc23c09c831f1c9f164b307\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cbc71ff600638fd808da51b06cd120e195639ce2d87562b7635f054623904e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6eb6e9c4336ac616872e742abccd29f16c5abd6e2125295cbdaf32b2df27fca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b9cf81d1ade72e9bb49b8beff4f5b89b283d45ffb25db95d8451fecc5451f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvp74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hpt9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:11 crc kubenswrapper[4620]: I0129 15:03:11.002214 4620 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c91978fbe53cd1972621d85c36497037114b11f5e7ddeea8d24192102fc45de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:11 crc kubenswrapper[4620]: I0129 15:03:11.022093 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:11 crc kubenswrapper[4620]: I0129 15:03:11.036829 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f4286d139747549b9456ff35cd793a21ba8b65f1b85b389b9c4d5ca2af9940a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dad96e2d45cb8113df24446b46713b98c4710ec629fbe23774e7e084c95f9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:11 crc kubenswrapper[4620]: I0129 15:03:11.046944 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kqpq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3805ba80-f304-49f1-8e23-e25e0a1ff177\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://136d01b440356f7f476855dc748ef4667209878640df0bf9a7edf470918742e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-422fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kqpq8\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:11 crc kubenswrapper[4620]: I0129 15:03:11.057868 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ckzvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93c9842c-403b-4367-a18f-32a8fa8e58de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0aadf5149b2b54c8bbad4fb9bf328d62f321ff3014f8310fe07f6b7e1abd587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6btr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ckzvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:11 crc kubenswrapper[4620]: I0129 15:03:11.072336 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tlwgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f66b658d-e5ec-445e-9494-0a0062e87c4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab556c57460cbb92047062018f7ce44d2e29138667978e01c0f201c9a118486d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:36Z\\\",\\\"message\\\":\\\"2026-01-29T15:01:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c36fd60c-edaa-4c67-8a1a-632bcc16f3cb\\\\n2026-01-29T15:01:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c36fd60c-edaa-4c67-8a1a-632bcc16f3cb to /host/opt/cni/bin/\\\\n2026-01-29T15:01:51Z [verbose] multus-daemon started\\\\n2026-01-29T15:01:51Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:02:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs7pp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tlwgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:11 crc kubenswrapper[4620]: I0129 15:03:11.083445 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"646c0f4c-1b62-4944-a0dc-4db4f86c1f2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a41955055dd921c81e1114d5a36e76e91c96b01286e9c7a76c6118e47cc5b1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4186c1a66f344cc0a9a19078458626591e4f12b34b10bbe01967f3740934a1f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:11 crc kubenswrapper[4620]: I0129 15:03:11.098531 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23162706-ec4f-4cbf-84e9-9fe8457a9bb4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b48fe3c8cf99838f414125abad56e9e095f210c6568de8bfa35afd892e2c27e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaedaf2a6e68e6e57c475209bc89c1d33b1918f57966669264cf43bafe68917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4485ef8bcb0b192b9e222ac7b09ea814e91f47aa6cda64a54c78aa1e685368fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7fc21f72583153f1084c95ed2cd5e25af3394f890fe912149f212927eff27d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:11 crc kubenswrapper[4620]: I0129 15:03:11.116573 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:11 crc kubenswrapper[4620]: I0129 15:03:11.131827 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:12:06.938966916 +0000 UTC Jan 29 15:03:11 crc kubenswrapper[4620]: I0129 15:03:11.135513 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eb327f6b7a70c04d48d3a0d8e5580f57b811bf604d5622983ac4c67b45de948\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:11 crc kubenswrapper[4620]: I0129 15:03:11.148331 4620 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39626c5c-59f0-466e-81f3-b434bae72182\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a59dc0c12e9642f7328364f31ede879b1be0838d0fa58b5f9e950d733fd3ed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89a1ec1f3ab3e725ed56aca70af1a94531a711e10a4661f48b020ac8206d3be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nhpr6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-755xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:11 crc kubenswrapper[4620]: I0129 15:03:11.160607 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-twqvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82634d3f-d985-4384-bd37-426d509d4e57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw4k7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-twqvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:11 crc kubenswrapper[4620]: I0129 15:03:11.176218 4620 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:11 crc kubenswrapper[4620]: I0129 15:03:11.199452 4620 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:01:32Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://375b349ffca0abfa8fe8f373c5b9c6ec6c82b0f7668f7a366f51df4e8c808fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://375b349ffca0abfa8fe8f373c5b9c6ec6c82b0f7668f7a366f51df4e8c808fe9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:02:47Z\\\",\\\"message\\\":\\\"machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"3f1b9878-e751-4e46-a226-ce007d2c4aa7\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.16\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0129 15:02:47.590430 6753 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:02:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ks4d9_openshift-ovn-kubernetes(fa9cbed4-05b4-48af-81c2-9f8903dc765e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:01:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bbvrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:01:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ks4d9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:03:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:03:11 crc kubenswrapper[4620]: I0129 15:03:11.871831 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:03:11 crc kubenswrapper[4620]: I0129 15:03:11.871893 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:03:11 crc kubenswrapper[4620]: E0129 15:03:11.872005 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:03:11 crc kubenswrapper[4620]: E0129 15:03:11.872112 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:03:12 crc kubenswrapper[4620]: I0129 15:03:12.133136 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 06:33:30.225790745 +0000 UTC Jan 29 15:03:12 crc kubenswrapper[4620]: I0129 15:03:12.871872 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:03:12 crc kubenswrapper[4620]: I0129 15:03:12.871872 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:03:12 crc kubenswrapper[4620]: E0129 15:03:12.872674 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:03:12 crc kubenswrapper[4620]: E0129 15:03:12.872784 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:03:13 crc kubenswrapper[4620]: I0129 15:03:13.133844 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 10:15:09.920198473 +0000 UTC Jan 29 15:03:13 crc kubenswrapper[4620]: I0129 15:03:13.872555 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:03:13 crc kubenswrapper[4620]: I0129 15:03:13.872574 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:03:13 crc kubenswrapper[4620]: E0129 15:03:13.872810 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:03:13 crc kubenswrapper[4620]: E0129 15:03:13.872933 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:03:14 crc kubenswrapper[4620]: I0129 15:03:14.134819 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 10:20:45.88467723 +0000 UTC Jan 29 15:03:14 crc kubenswrapper[4620]: I0129 15:03:14.871704 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:03:14 crc kubenswrapper[4620]: I0129 15:03:14.872168 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:03:14 crc kubenswrapper[4620]: E0129 15:03:14.872503 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:03:14 crc kubenswrapper[4620]: E0129 15:03:14.872710 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:03:15 crc kubenswrapper[4620]: I0129 15:03:15.136526 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 22:14:28.14605891 +0000 UTC Jan 29 15:03:15 crc kubenswrapper[4620]: I0129 15:03:15.872175 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:03:15 crc kubenswrapper[4620]: I0129 15:03:15.872188 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:03:15 crc kubenswrapper[4620]: E0129 15:03:15.872417 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:03:15 crc kubenswrapper[4620]: E0129 15:03:15.872324 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:03:15 crc kubenswrapper[4620]: E0129 15:03:15.978287 4620 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 15:03:16 crc kubenswrapper[4620]: I0129 15:03:16.137274 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 11:18:26.942233164 +0000 UTC Jan 29 15:03:16 crc kubenswrapper[4620]: I0129 15:03:16.872104 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:03:16 crc kubenswrapper[4620]: E0129 15:03:16.872314 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:03:16 crc kubenswrapper[4620]: I0129 15:03:16.872121 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:03:16 crc kubenswrapper[4620]: E0129 15:03:16.872922 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:03:17 crc kubenswrapper[4620]: I0129 15:03:17.137969 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 06:05:21.93569103 +0000 UTC Jan 29 15:03:17 crc kubenswrapper[4620]: I0129 15:03:17.866393 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:03:17 crc kubenswrapper[4620]: I0129 15:03:17.866481 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:03:17 crc kubenswrapper[4620]: I0129 15:03:17.866506 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:03:17 crc kubenswrapper[4620]: I0129 15:03:17.866538 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:03:17 crc kubenswrapper[4620]: I0129 15:03:17.866561 4620 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:03:17Z","lastTransitionTime":"2026-01-29T15:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:03:17 crc kubenswrapper[4620]: I0129 15:03:17.871457 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:03:17 crc kubenswrapper[4620]: I0129 15:03:17.871475 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:03:17 crc kubenswrapper[4620]: E0129 15:03:17.871711 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:03:17 crc kubenswrapper[4620]: E0129 15:03:17.871902 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:03:17 crc kubenswrapper[4620]: I0129 15:03:17.936315 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8q22"] Jan 29 15:03:17 crc kubenswrapper[4620]: I0129 15:03:17.937157 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8q22" Jan 29 15:03:17 crc kubenswrapper[4620]: I0129 15:03:17.940575 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 15:03:17 crc kubenswrapper[4620]: I0129 15:03:17.940850 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 15:03:17 crc kubenswrapper[4620]: I0129 15:03:17.941037 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 15:03:17 crc kubenswrapper[4620]: I0129 15:03:17.942633 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:17.999892 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=97.999868165 podStartE2EDuration="1m37.999868165s" podCreationTimestamp="2026-01-29 15:01:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:17.999746221 +0000 UTC m=+138.612573896" watchObservedRunningTime="2026-01-29 15:03:17.999868165 +0000 UTC m=+138.612695830" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.000244 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=38.000235746 podStartE2EDuration="38.000235746s" podCreationTimestamp="2026-01-29 15:02:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:17.979821633 +0000 UTC m=+138.592649288" watchObservedRunningTime="2026-01-29 15:03:18.000235746 +0000 UTC m=+138.613063411" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.017836 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=106.0178167 podStartE2EDuration="1m46.0178167s" podCreationTimestamp="2026-01-29 15:01:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:18.017586042 +0000 UTC m=+138.630413697" watchObservedRunningTime="2026-01-29 15:03:18.0178167 +0000 UTC m=+138.630644355" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.031548 4620 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podStartSLOduration=111.031528392 podStartE2EDuration="1m51.031528392s" podCreationTimestamp="2026-01-29 15:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:18.030333765 +0000 UTC m=+138.643161420" watchObservedRunningTime="2026-01-29 15:03:18.031528392 +0000 UTC m=+138.644356047" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.048935 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6397c036-c0ac-4d61-94e9-b3679cba23d2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-h8q22\" (UID: \"6397c036-c0ac-4d61-94e9-b3679cba23d2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8q22" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.049028 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6397c036-c0ac-4d61-94e9-b3679cba23d2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-h8q22\" (UID: \"6397c036-c0ac-4d61-94e9-b3679cba23d2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8q22" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.049079 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6397c036-c0ac-4d61-94e9-b3679cba23d2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-h8q22\" (UID: \"6397c036-c0ac-4d61-94e9-b3679cba23d2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8q22" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.049137 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6397c036-c0ac-4d61-94e9-b3679cba23d2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-h8q22\" (UID: \"6397c036-c0ac-4d61-94e9-b3679cba23d2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8q22" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.049218 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6397c036-c0ac-4d61-94e9-b3679cba23d2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-h8q22\" (UID: \"6397c036-c0ac-4d61-94e9-b3679cba23d2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8q22" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.050019 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-hpt9v" podStartSLOduration=111.049976974 podStartE2EDuration="1m51.049976974s" podCreationTimestamp="2026-01-29 15:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:18.049712895 +0000 UTC m=+138.662540550" watchObservedRunningTime="2026-01-29 15:03:18.049976974 +0000 UTC m=+138.662804629" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.063884 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-tlwgt" podStartSLOduration=111.063858501 
podStartE2EDuration="1m51.063858501s" podCreationTimestamp="2026-01-29 15:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:18.063025425 +0000 UTC m=+138.675853110" watchObservedRunningTime="2026-01-29 15:03:18.063858501 +0000 UTC m=+138.676686146" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.130495 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-kqpq8" podStartSLOduration=111.13047186 podStartE2EDuration="1m51.13047186s" podCreationTimestamp="2026-01-29 15:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:18.120367322 +0000 UTC m=+138.733194967" watchObservedRunningTime="2026-01-29 15:03:18.13047186 +0000 UTC m=+138.743299505" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.138126 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 16:01:21.165668679 +0000 UTC Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.138205 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.140341 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-ckzvr" podStartSLOduration=111.140320661 podStartE2EDuration="1m51.140320661s" podCreationTimestamp="2026-01-29 15:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:18.130336936 +0000 UTC m=+138.743164591" watchObservedRunningTime="2026-01-29 15:03:18.140320661 +0000 UTC m=+138.753148306" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.145087 4620 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.149846 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6397c036-c0ac-4d61-94e9-b3679cba23d2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-h8q22\" (UID: \"6397c036-c0ac-4d61-94e9-b3679cba23d2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8q22" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.150041 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6397c036-c0ac-4d61-94e9-b3679cba23d2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-h8q22\" (UID: \"6397c036-c0ac-4d61-94e9-b3679cba23d2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8q22" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.149974 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6397c036-c0ac-4d61-94e9-b3679cba23d2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-h8q22\" (UID: \"6397c036-c0ac-4d61-94e9-b3679cba23d2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8q22" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.150120 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6397c036-c0ac-4d61-94e9-b3679cba23d2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-h8q22\" (UID: \"6397c036-c0ac-4d61-94e9-b3679cba23d2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8q22" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.150267 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6397c036-c0ac-4d61-94e9-b3679cba23d2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-h8q22\" (UID: \"6397c036-c0ac-4d61-94e9-b3679cba23d2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8q22" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.150307 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6397c036-c0ac-4d61-94e9-b3679cba23d2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-h8q22\" (UID: \"6397c036-c0ac-4d61-94e9-b3679cba23d2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8q22" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.150455 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6397c036-c0ac-4d61-94e9-b3679cba23d2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-h8q22\" (UID: \"6397c036-c0ac-4d61-94e9-b3679cba23d2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8q22" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.150862 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6397c036-c0ac-4d61-94e9-b3679cba23d2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-h8q22\" (UID: \"6397c036-c0ac-4d61-94e9-b3679cba23d2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8q22" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.156419 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6397c036-c0ac-4d61-94e9-b3679cba23d2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-h8q22\" (UID: \"6397c036-c0ac-4d61-94e9-b3679cba23d2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8q22" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.164455 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=71.164433791 podStartE2EDuration="1m11.164433791s" podCreationTimestamp="2026-01-29 15:02:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:18.15298978 +0000 UTC m=+138.765817425" watchObservedRunningTime="2026-01-29 15:03:18.164433791 +0000 UTC m=+138.777261436" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.167960 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6397c036-c0ac-4d61-94e9-b3679cba23d2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-h8q22\" (UID: \"6397c036-c0ac-4d61-94e9-b3679cba23d2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8q22" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.180215 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=85.180194547 podStartE2EDuration="1m25.180194547s" podCreationTimestamp="2026-01-29 15:01:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:18.165512855 +0000 UTC m=+138.778340500" watchObservedRunningTime="2026-01-29 15:03:18.180194547 +0000 UTC m=+138.793022192" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.202034 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-755xr" podStartSLOduration=110.202015196 podStartE2EDuration="1m50.202015196s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:18.201335424 +0000 UTC m=+138.814163079" watchObservedRunningTime="2026-01-29 15:03:18.202015196 +0000 UTC m=+138.814842851" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.252240 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8q22" Jan 29 15:03:18 crc kubenswrapper[4620]: W0129 15:03:18.266524 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6397c036_c0ac_4d61_94e9_b3679cba23d2.slice/crio-43344c493dcef9aa98e18daa4da0dd55bba972e041e72aa6414f50adca173386 WatchSource:0}: Error finding container 43344c493dcef9aa98e18daa4da0dd55bba972e041e72aa6414f50adca173386: Status 404 returned error can't find the container with id 43344c493dcef9aa98e18daa4da0dd55bba972e041e72aa6414f50adca173386 Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.669744 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8q22" event={"ID":"6397c036-c0ac-4d61-94e9-b3679cba23d2","Type":"ContainerStarted","Data":"0901bbbbbc4d44cfee666f0fb9b89d1d1fc54007db7c6a03576045c5b3c4613d"} Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.670609 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8q22" event={"ID":"6397c036-c0ac-4d61-94e9-b3679cba23d2","Type":"ContainerStarted","Data":"43344c493dcef9aa98e18daa4da0dd55bba972e041e72aa6414f50adca173386"} Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.689592 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8q22" podStartSLOduration=111.68957121 podStartE2EDuration="1m51.68957121s" podCreationTimestamp="2026-01-29 15:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:18.688101504 +0000 UTC m=+139.300929189" watchObservedRunningTime="2026-01-29 15:03:18.68957121 +0000 UTC m=+139.302398865" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.871927 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:03:18 crc kubenswrapper[4620]: I0129 15:03:18.872032 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:03:18 crc kubenswrapper[4620]: E0129 15:03:18.872414 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:03:18 crc kubenswrapper[4620]: E0129 15:03:18.872638 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:03:19 crc kubenswrapper[4620]: I0129 15:03:19.871423 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:03:19 crc kubenswrapper[4620]: E0129 15:03:19.871572 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:03:19 crc kubenswrapper[4620]: I0129 15:03:19.871847 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:03:19 crc kubenswrapper[4620]: E0129 15:03:19.871923 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:03:20 crc kubenswrapper[4620]: I0129 15:03:20.872564 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:03:20 crc kubenswrapper[4620]: I0129 15:03:20.872600 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:03:20 crc kubenswrapper[4620]: E0129 15:03:20.873542 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:03:20 crc kubenswrapper[4620]: E0129 15:03:20.873645 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:03:20 crc kubenswrapper[4620]: E0129 15:03:20.978952 4620 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 15:03:21 crc kubenswrapper[4620]: I0129 15:03:21.871710 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:03:21 crc kubenswrapper[4620]: I0129 15:03:21.871710 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:03:21 crc kubenswrapper[4620]: E0129 15:03:21.871854 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:03:21 crc kubenswrapper[4620]: E0129 15:03:21.871925 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:03:22 crc kubenswrapper[4620]: I0129 15:03:22.871811 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:03:22 crc kubenswrapper[4620]: I0129 15:03:22.871832 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:03:22 crc kubenswrapper[4620]: E0129 15:03:22.871946 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:03:22 crc kubenswrapper[4620]: E0129 15:03:22.872016 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:03:23 crc kubenswrapper[4620]: I0129 15:03:23.872330 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:03:23 crc kubenswrapper[4620]: I0129 15:03:23.872394 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:03:23 crc kubenswrapper[4620]: E0129 15:03:23.872468 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:03:23 crc kubenswrapper[4620]: E0129 15:03:23.872704 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:03:23 crc kubenswrapper[4620]: I0129 15:03:23.874133 4620 scope.go:117] "RemoveContainer" containerID="375b349ffca0abfa8fe8f373c5b9c6ec6c82b0f7668f7a366f51df4e8c808fe9" Jan 29 15:03:23 crc kubenswrapper[4620]: E0129 15:03:23.874401 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ks4d9_openshift-ovn-kubernetes(fa9cbed4-05b4-48af-81c2-9f8903dc765e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" Jan 29 15:03:24 crc kubenswrapper[4620]: I0129 15:03:24.690489 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tlwgt_f66b658d-e5ec-445e-9494-0a0062e87c4c/kube-multus/1.log" Jan 29 15:03:24 crc kubenswrapper[4620]: I0129 15:03:24.691328 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tlwgt_f66b658d-e5ec-445e-9494-0a0062e87c4c/kube-multus/0.log" Jan 29 15:03:24 crc kubenswrapper[4620]: I0129 15:03:24.691404 4620 generic.go:334] "Generic (PLEG): container finished" podID="f66b658d-e5ec-445e-9494-0a0062e87c4c" containerID="ab556c57460cbb92047062018f7ce44d2e29138667978e01c0f201c9a118486d" exitCode=1 Jan 29 15:03:24 crc kubenswrapper[4620]: I0129 15:03:24.691449 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tlwgt" event={"ID":"f66b658d-e5ec-445e-9494-0a0062e87c4c","Type":"ContainerDied","Data":"ab556c57460cbb92047062018f7ce44d2e29138667978e01c0f201c9a118486d"} Jan 29 15:03:24 crc kubenswrapper[4620]: I0129 15:03:24.691494 4620 scope.go:117] "RemoveContainer" containerID="45eb365783e99496c290c69228fc342482d38c579961af3b16bce6f9b5af8c9b" Jan 29 15:03:24 crc kubenswrapper[4620]: I0129 15:03:24.692647 4620 scope.go:117] "RemoveContainer" containerID="ab556c57460cbb92047062018f7ce44d2e29138667978e01c0f201c9a118486d" Jan 29 15:03:24 crc kubenswrapper[4620]: E0129 15:03:24.693050 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-tlwgt_openshift-multus(f66b658d-e5ec-445e-9494-0a0062e87c4c)\"" pod="openshift-multus/multus-tlwgt" podUID="f66b658d-e5ec-445e-9494-0a0062e87c4c" Jan 29 15:03:24 crc kubenswrapper[4620]: I0129 15:03:24.872479 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:03:24 crc kubenswrapper[4620]: I0129 15:03:24.872479 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:03:24 crc kubenswrapper[4620]: E0129 15:03:24.872614 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:03:24 crc kubenswrapper[4620]: E0129 15:03:24.872858 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:03:25 crc kubenswrapper[4620]: I0129 15:03:25.696808 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tlwgt_f66b658d-e5ec-445e-9494-0a0062e87c4c/kube-multus/1.log" Jan 29 15:03:25 crc kubenswrapper[4620]: I0129 15:03:25.871946 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:03:25 crc kubenswrapper[4620]: I0129 15:03:25.872014 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:03:25 crc kubenswrapper[4620]: E0129 15:03:25.872079 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:03:25 crc kubenswrapper[4620]: E0129 15:03:25.872242 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:03:25 crc kubenswrapper[4620]: E0129 15:03:25.980590 4620 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 29 15:03:26 crc kubenswrapper[4620]: I0129 15:03:26.872294 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:03:26 crc kubenswrapper[4620]: E0129 15:03:26.872463 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:03:26 crc kubenswrapper[4620]: I0129 15:03:26.872812 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:03:26 crc kubenswrapper[4620]: E0129 15:03:26.873160 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:03:27 crc kubenswrapper[4620]: I0129 15:03:27.871711 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:03:27 crc kubenswrapper[4620]: E0129 15:03:27.871998 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:03:27 crc kubenswrapper[4620]: I0129 15:03:27.871737 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:03:27 crc kubenswrapper[4620]: E0129 15:03:27.872379 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57" Jan 29 15:03:28 crc kubenswrapper[4620]: I0129 15:03:28.871889 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:03:28 crc kubenswrapper[4620]: E0129 15:03:28.872052 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:03:28 crc kubenswrapper[4620]: I0129 15:03:28.872121 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:03:28 crc kubenswrapper[4620]: E0129 15:03:28.872201 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:03:29 crc kubenswrapper[4620]: I0129 15:03:29.871701 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:03:29 crc kubenswrapper[4620]: I0129 15:03:29.871901 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf"
Jan 29 15:03:29 crc kubenswrapper[4620]: E0129 15:03:29.871904 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:03:29 crc kubenswrapper[4620]: E0129 15:03:29.872068 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57"
Jan 29 15:03:30 crc kubenswrapper[4620]: I0129 15:03:30.872047 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:03:30 crc kubenswrapper[4620]: I0129 15:03:30.873241 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:03:30 crc kubenswrapper[4620]: E0129 15:03:30.874962 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:03:30 crc kubenswrapper[4620]: E0129 15:03:30.875350 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:03:30 crc kubenswrapper[4620]: E0129 15:03:30.981451 4620 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 15:03:31 crc kubenswrapper[4620]: I0129 15:03:31.871878 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:03:31 crc kubenswrapper[4620]: I0129 15:03:31.871948 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf"
Jan 29 15:03:31 crc kubenswrapper[4620]: E0129 15:03:31.872135 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:03:31 crc kubenswrapper[4620]: E0129 15:03:31.872498 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57"
Jan 29 15:03:32 crc kubenswrapper[4620]: I0129 15:03:32.872333 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:03:32 crc kubenswrapper[4620]: E0129 15:03:32.872506 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:03:32 crc kubenswrapper[4620]: I0129 15:03:32.872347 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:03:32 crc kubenswrapper[4620]: E0129 15:03:32.872794 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:03:33 crc kubenswrapper[4620]: I0129 15:03:33.872309 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:03:33 crc kubenswrapper[4620]: I0129 15:03:33.872383 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf"
Jan 29 15:03:33 crc kubenswrapper[4620]: E0129 15:03:33.872546 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:03:33 crc kubenswrapper[4620]: E0129 15:03:33.872686 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57"
Jan 29 15:03:34 crc kubenswrapper[4620]: I0129 15:03:34.872568 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:03:34 crc kubenswrapper[4620]: I0129 15:03:34.872568 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:03:34 crc kubenswrapper[4620]: E0129 15:03:34.873617 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:03:34 crc kubenswrapper[4620]: E0129 15:03:34.874535 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:03:35 crc kubenswrapper[4620]: I0129 15:03:35.872480 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:03:35 crc kubenswrapper[4620]: I0129 15:03:35.872672 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf"
Jan 29 15:03:35 crc kubenswrapper[4620]: E0129 15:03:35.872796 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:03:35 crc kubenswrapper[4620]: E0129 15:03:35.873161 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57"
Jan 29 15:03:35 crc kubenswrapper[4620]: I0129 15:03:35.874147 4620 scope.go:117] "RemoveContainer" containerID="375b349ffca0abfa8fe8f373c5b9c6ec6c82b0f7668f7a366f51df4e8c808fe9"
Jan 29 15:03:35 crc kubenswrapper[4620]: E0129 15:03:35.983122 4620 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 15:03:36 crc kubenswrapper[4620]: I0129 15:03:36.732620 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovnkube-controller/3.log"
Jan 29 15:03:36 crc kubenswrapper[4620]: I0129 15:03:36.735048 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerStarted","Data":"873f8b03c9f47305fc2108ee369b2291ef0cc1e17757d1d8933c564550177d55"}
Jan 29 15:03:36 crc kubenswrapper[4620]: I0129 15:03:36.735458 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9"
Jan 29 15:03:36 crc kubenswrapper[4620]: I0129 15:03:36.760514 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" podStartSLOduration=129.760499601 podStartE2EDuration="2m9.760499601s" podCreationTimestamp="2026-01-29 15:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:36.759773958 +0000 UTC m=+157.372601603" watchObservedRunningTime="2026-01-29 15:03:36.760499601 +0000 UTC m=+157.373327246"
Jan 29 15:03:36 crc kubenswrapper[4620]: I0129 15:03:36.872278 4620 scope.go:117] "RemoveContainer" containerID="ab556c57460cbb92047062018f7ce44d2e29138667978e01c0f201c9a118486d"
Jan 29 15:03:36 crc kubenswrapper[4620]: I0129 15:03:36.872912 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:03:36 crc kubenswrapper[4620]: E0129 15:03:36.872987 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:03:36 crc kubenswrapper[4620]: I0129 15:03:36.873064 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:03:36 crc kubenswrapper[4620]: E0129 15:03:36.873236 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:03:36 crc kubenswrapper[4620]: I0129 15:03:36.976231 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-twqvf"]
Jan 29 15:03:36 crc kubenswrapper[4620]: I0129 15:03:36.976368 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf"
Jan 29 15:03:36 crc kubenswrapper[4620]: E0129 15:03:36.976476 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57"
Jan 29 15:03:37 crc kubenswrapper[4620]: I0129 15:03:37.740069 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tlwgt_f66b658d-e5ec-445e-9494-0a0062e87c4c/kube-multus/1.log"
Jan 29 15:03:37 crc kubenswrapper[4620]: I0129 15:03:37.740161 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tlwgt" event={"ID":"f66b658d-e5ec-445e-9494-0a0062e87c4c","Type":"ContainerStarted","Data":"8e9ea767b98760df113964c83b8aa3b0a3171651da218d1ca361f2f91ef91add"}
Jan 29 15:03:37 crc kubenswrapper[4620]: I0129 15:03:37.871440 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:03:37 crc kubenswrapper[4620]: E0129 15:03:37.871589 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:03:38 crc kubenswrapper[4620]: I0129 15:03:38.871704 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf"
Jan 29 15:03:38 crc kubenswrapper[4620]: I0129 15:03:38.871748 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:03:38 crc kubenswrapper[4620]: E0129 15:03:38.872401 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57"
Jan 29 15:03:38 crc kubenswrapper[4620]: E0129 15:03:38.872470 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:03:38 crc kubenswrapper[4620]: I0129 15:03:38.872966 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:03:38 crc kubenswrapper[4620]: E0129 15:03:38.873144 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:03:39 crc kubenswrapper[4620]: I0129 15:03:39.872422 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:03:39 crc kubenswrapper[4620]: E0129 15:03:39.872629 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:03:40 crc kubenswrapper[4620]: I0129 15:03:40.871604 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:03:40 crc kubenswrapper[4620]: I0129 15:03:40.871694 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf"
Jan 29 15:03:40 crc kubenswrapper[4620]: I0129 15:03:40.873784 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:03:40 crc kubenswrapper[4620]: E0129 15:03:40.873773 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:03:40 crc kubenswrapper[4620]: E0129 15:03:40.873904 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-twqvf" podUID="82634d3f-d985-4384-bd37-426d509d4e57"
Jan 29 15:03:40 crc kubenswrapper[4620]: E0129 15:03:40.874026 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:03:41 crc kubenswrapper[4620]: I0129 15:03:41.210893 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:41 crc kubenswrapper[4620]: I0129 15:03:41.211085 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:03:41 crc kubenswrapper[4620]: E0129 15:03:41.211321 4620 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 15:03:41 crc kubenswrapper[4620]: E0129 15:03:41.211427 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:05:43.211401942 +0000 UTC m=+283.824229627 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 15:03:41 crc kubenswrapper[4620]: E0129 15:03:41.211551 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:05:43.211519225 +0000 UTC m=+283.824346870 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:41 crc kubenswrapper[4620]: I0129 15:03:41.716424 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:03:41 crc kubenswrapper[4620]: I0129 15:03:41.716467 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:03:41 crc kubenswrapper[4620]: I0129 15:03:41.716493 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:03:41 crc kubenswrapper[4620]: E0129 15:03:41.716624 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 15:03:41 crc kubenswrapper[4620]: E0129 15:03:41.716642 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 15:03:41 crc kubenswrapper[4620]: E0129 15:03:41.716652 4620 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 15:03:41 crc kubenswrapper[4620]: E0129 15:03:41.716699 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:05:43.716686764 +0000 UTC m=+284.329514399 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 15:03:41 crc kubenswrapper[4620]: E0129 15:03:41.716704 4620 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 15:03:41 crc kubenswrapper[4620]: E0129 15:03:41.716888 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:05:43.716836939 +0000 UTC m=+284.329664584 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 15:03:41 crc kubenswrapper[4620]: E0129 15:03:41.716978 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 15:03:41 crc kubenswrapper[4620]: E0129 15:03:41.717000 4620 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 15:03:41 crc kubenswrapper[4620]: E0129 15:03:41.717017 4620 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 15:03:41 crc kubenswrapper[4620]: E0129 15:03:41.717062 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:05:43.717051996 +0000 UTC m=+284.329879721 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 15:03:41 crc kubenswrapper[4620]: I0129 15:03:41.872197 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:03:41 crc kubenswrapper[4620]: I0129 15:03:41.874181 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 29 15:03:41 crc kubenswrapper[4620]: I0129 15:03:41.874226 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 29 15:03:42 crc kubenswrapper[4620]: I0129 15:03:42.871900 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:03:42 crc kubenswrapper[4620]: I0129 15:03:42.871928 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf"
Jan 29 15:03:42 crc kubenswrapper[4620]: I0129 15:03:42.872146 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:03:42 crc kubenswrapper[4620]: I0129 15:03:42.875416 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 29 15:03:42 crc kubenswrapper[4620]: I0129 15:03:42.875414 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 29 15:03:42 crc kubenswrapper[4620]: I0129 15:03:42.875908 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 29 15:03:42 crc kubenswrapper[4620]: I0129 15:03:42.876071 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.121038 4620 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.174133 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.174526 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.182237 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.182684 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.182695 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.182826 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.183646 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.183713 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.186972 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwlxb\" (UniqueName: \"kubernetes.io/projected/58f39c40-ed87-43dd-90d0-d892d4a56375-kube-api-access-pwlxb\") pod \"route-controller-manager-6576b87f9c-phzrb\" (UID: \"58f39c40-ed87-43dd-90d0-d892d4a56375\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.187055 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58f39c40-ed87-43dd-90d0-d892d4a56375-config\") pod \"route-controller-manager-6576b87f9c-phzrb\" (UID: \"58f39c40-ed87-43dd-90d0-d892d4a56375\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.187189 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58f39c40-ed87-43dd-90d0-d892d4a56375-client-ca\") pod \"route-controller-manager-6576b87f9c-phzrb\" (UID: \"58f39c40-ed87-43dd-90d0-d892d4a56375\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.187296 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58f39c40-ed87-43dd-90d0-d892d4a56375-serving-cert\") pod \"route-controller-manager-6576b87f9c-phzrb\" (UID: \"58f39c40-ed87-43dd-90d0-d892d4a56375\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.194425 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.197859 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.203826 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-x75k8"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.205191 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.213993 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8h784"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.214422 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8h784"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.214628 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-x75k8"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.215032 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gt9s9"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.215674 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gt9s9"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.217225 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjk2k"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.217730 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjk2k"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.218924 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.218973 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.218940 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.221822 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-z57gf"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.222220 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qpx6x"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.222667 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qpx6x"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.222924 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-z57gf"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.224916 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.224953 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.226800 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2j567"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.227180 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2j567"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.228333 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-gwr5k"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.228775 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-gwr5k"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.229425 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.229621 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.230534 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.230684 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.230837 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.231076 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.238824 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.238989 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.239066 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.239484 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.241645 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-swx9b"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.242231 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.242574 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwkt"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.242894 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-tgs4s"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.243490 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgs4s"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.243700 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.243746 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmpbk"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.244184 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-swx9b"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.244454 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwkt"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.244684 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmpbk"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.257279 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.259007 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.259392 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.259584 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.259816 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.260819 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.261033 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.261216 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.261407 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.261537 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.261688 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.263896 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8grgt"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.261996 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.262035 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.262080 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.270719 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8q7cm"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.271372 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8grgt"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.272121 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.272645 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.272871 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.273211 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.295300 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-5ppqg"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.297440 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sqrtn"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.299063 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5qflb"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.300361 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5lxk8"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.303469 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.295391 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm"
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.273394 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.273445 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.274430 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.274483 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.274556 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.274788 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.274905 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.275000 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.275077 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.275132 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.277091 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.341572 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5ppqg" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.341810 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.342909 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5qflb" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.344583 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.349103 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwlxb\" (UniqueName: \"kubernetes.io/projected/58f39c40-ed87-43dd-90d0-d892d4a56375-kube-api-access-pwlxb\") pod \"route-controller-manager-6576b87f9c-phzrb\" (UID: \"58f39c40-ed87-43dd-90d0-d892d4a56375\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.349137 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b6421d4-a97f-4867-a78c-50ba4d6486ea-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qpx6x\" (UID: \"9b6421d4-a97f-4867-a78c-50ba4d6486ea\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qpx6x" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.349169 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58f39c40-ed87-43dd-90d0-d892d4a56375-config\") pod \"route-controller-manager-6576b87f9c-phzrb\" (UID: \"58f39c40-ed87-43dd-90d0-d892d4a56375\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.349189 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58f39c40-ed87-43dd-90d0-d892d4a56375-client-ca\") pod \"route-controller-manager-6576b87f9c-phzrb\" (UID: \"58f39c40-ed87-43dd-90d0-d892d4a56375\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.349224 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58f39c40-ed87-43dd-90d0-d892d4a56375-serving-cert\") pod \"route-controller-manager-6576b87f9c-phzrb\" (UID: \"58f39c40-ed87-43dd-90d0-d892d4a56375\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.349247 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhz5z\" (UniqueName: \"kubernetes.io/projected/9b6421d4-a97f-4867-a78c-50ba4d6486ea-kube-api-access-rhz5z\") pod \"cluster-samples-operator-665b6dd947-qpx6x\" (UID: \"9b6421d4-a97f-4867-a78c-50ba4d6486ea\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qpx6x" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.349472 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thq9h"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.349978 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-c8czx"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.350500 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8czx" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.351290 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58f39c40-ed87-43dd-90d0-d892d4a56375-config\") pod \"route-controller-manager-6576b87f9c-phzrb\" (UID: \"58f39c40-ed87-43dd-90d0-d892d4a56375\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.352083 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thq9h" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.353326 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58f39c40-ed87-43dd-90d0-d892d4a56375-client-ca\") pod \"route-controller-manager-6576b87f9c-phzrb\" (UID: \"58f39c40-ed87-43dd-90d0-d892d4a56375\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.353456 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5lxk8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.357043 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.357165 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.357246 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.357320 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.357385 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.357447 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.357516 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.357588 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.357675 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.357789 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.357864 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.357931 4620 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.357994 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.358118 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.358296 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.358424 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.358666 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.358799 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.359316 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.361722 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.361934 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.373184 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w8bnd"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.374721 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w8bnd" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.361976 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.362013 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.362245 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.362284 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.362316 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.362342 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.362521 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.362584 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.362732 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.362739 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.362786 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.362836 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.362860 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.362877 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.362927 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.362964 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.363028 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.363162 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" 
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.363401 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.363492 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.376086 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.363506 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.363543 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.363812 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.369622 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.370768 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.371557 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.379493 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58f39c40-ed87-43dd-90d0-d892d4a56375-serving-cert\") pod \"route-controller-manager-6576b87f9c-phzrb\" (UID: \"58f39c40-ed87-43dd-90d0-d892d4a56375\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.379637 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.379727 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4cnjz"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.386224 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.389027 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.389462 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8v2d"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.390017 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8v2d"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.390123 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cnjz"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.397116 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.412045 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.413192 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-xbhzj"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.415903 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.418559 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.418674 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.419537 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xbhzj"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.423047 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.439575 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.440293 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.446941 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cl5pv"]
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.447584 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cl5pv"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.449740 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-config\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.449910 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.449939 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.449962 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhz5z\" (UniqueName: \"kubernetes.io/projected/9b6421d4-a97f-4867-a78c-50ba4d6486ea-kube-api-access-rhz5z\") pod \"cluster-samples-operator-665b6dd947-qpx6x\" (UID: \"9b6421d4-a97f-4867-a78c-50ba4d6486ea\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qpx6x"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.449979 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.449996 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450012 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm"
Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450029 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-node-pullsecrets\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8"
\"kubernetes.io/host-path/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-node-pullsecrets\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450045 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-etcd-serving-ca\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450060 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b741ee63-8420-4991-9682-69a55770d9c6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-sjk2k\" (UID: \"b741ee63-8420-4991-9682-69a55770d9c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjk2k" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450075 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-etcd-client\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450091 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-audit\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450105 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rghwj\" (UniqueName: \"kubernetes.io/projected/8a1b8f5b-4123-4cc0-9d2c-dca6ae8453d5-kube-api-access-rghwj\") pod \"downloads-7954f5f757-gwr5k\" (UID: \"8a1b8f5b-4123-4cc0-9d2c-dca6ae8453d5\") " pod="openshift-console/downloads-7954f5f757-gwr5k" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450120 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-trusted-ca-bundle\") pod \"console-f9d7485db-z57gf\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450136 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450152 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b741ee63-8420-4991-9682-69a55770d9c6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-sjk2k\" (UID: 
\"b741ee63-8420-4991-9682-69a55770d9c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjk2k" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450168 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37d0c18f-2c13-4cb8-8523-853e305dfa47-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gt9s9\" (UID: \"37d0c18f-2c13-4cb8-8523-853e305dfa47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gt9s9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450184 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02381b1b-463c-4532-a690-deee86ffc674-config\") pod \"machine-api-operator-5694c8668f-swx9b\" (UID: \"02381b1b-463c-4532-a690-deee86ffc674\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-swx9b" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450200 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4b55\" (UniqueName: \"kubernetes.io/projected/e10d1772-1876-4d4f-a2f8-9cbd70d5e7c9-kube-api-access-t4b55\") pod \"openshift-controller-manager-operator-756b6f6bc6-rmpbk\" (UID: \"e10d1772-1876-4d4f-a2f8-9cbd70d5e7c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmpbk" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450216 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-audit-policies\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450231 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37d0c18f-2c13-4cb8-8523-853e305dfa47-config\") pod \"authentication-operator-69f744f599-gt9s9\" (UID: \"37d0c18f-2c13-4cb8-8523-853e305dfa47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gt9s9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450254 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aa662f18-6ab4-43b8-8e65-8de41043b74d-console-oauth-config\") pod \"console-f9d7485db-z57gf\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450270 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wldms\" (UniqueName: \"kubernetes.io/projected/bfd69ce1-51c3-44b0-81e5-576a633bff91-kube-api-access-wldms\") pod \"machine-approver-56656f9798-tgs4s\" (UID: \"bfd69ce1-51c3-44b0-81e5-576a633bff91\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgs4s" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450287 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9d96dcf-b094-485c-8636-401ccc71e918-serving-cert\") pod 
\"controller-manager-879f6c89f-2j567\" (UID: \"f9d96dcf-b094-485c-8636-401ccc71e918\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450567 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc6f4802-99e9-4a10-bdfb-f132a81023eb-serving-cert\") pod \"console-operator-58897d9998-8h784\" (UID: \"bc6f4802-99e9-4a10-bdfb-f132a81023eb\") " pod="openshift-console-operator/console-operator-58897d9998-8h784" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450600 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bc6f4802-99e9-4a10-bdfb-f132a81023eb-trusted-ca\") pod \"console-operator-58897d9998-8h784\" (UID: \"bc6f4802-99e9-4a10-bdfb-f132a81023eb\") " pod="openshift-console-operator/console-operator-58897d9998-8h784" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450619 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-encryption-config\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450653 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-976x2\" (UniqueName: \"kubernetes.io/projected/02e136c3-8346-4d92-bc4f-fbee60798447-kube-api-access-976x2\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwkt\" (UID: \"02e136c3-8346-4d92-bc4f-fbee60798447\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwkt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.450835 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.451066 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02e136c3-8346-4d92-bc4f-fbee60798447-config\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwkt\" (UID: \"02e136c3-8346-4d92-bc4f-fbee60798447\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwkt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.451194 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5h6r9"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.451238 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-serving-cert\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.451273 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/f9d96dcf-b094-485c-8636-401ccc71e918-config\") pod \"controller-manager-879f6c89f-2j567\" (UID: \"f9d96dcf-b094-485c-8636-401ccc71e918\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.451308 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-audit-policies\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.451343 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bfd69ce1-51c3-44b0-81e5-576a633bff91-auth-proxy-config\") pod \"machine-approver-56656f9798-tgs4s\" (UID: \"bfd69ce1-51c3-44b0-81e5-576a633bff91\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgs4s" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.451388 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfd69ce1-51c3-44b0-81e5-576a633bff91-config\") pod \"machine-approver-56656f9798-tgs4s\" (UID: \"bfd69ce1-51c3-44b0-81e5-576a633bff91\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgs4s" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.451427 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e10d1772-1876-4d4f-a2f8-9cbd70d5e7c9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rmpbk\" (UID: \"e10d1772-1876-4d4f-a2f8-9cbd70d5e7c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmpbk" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.451454 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-oauth-serving-cert\") pod \"console-f9d7485db-z57gf\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.451484 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lgkk\" (UniqueName: \"kubernetes.io/projected/aa662f18-6ab4-43b8-8e65-8de41043b74d-kube-api-access-2lgkk\") pod \"console-f9d7485db-z57gf\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.451570 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e24af32-2b91-4772-b26e-a683ba5c3d16-serving-cert\") pod \"openshift-config-operator-7777fb866f-qlbvc\" (UID: \"4e24af32-2b91-4772-b26e-a683ba5c3d16\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.451612 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-image-import-ca\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.451629 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5h6r9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.451649 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.451681 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.451719 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e787f3bd-fa49-4961-9556-f5f0f25fca06-metrics-tls\") pod \"dns-operator-744455d44c-8grgt\" (UID: \"e787f3bd-fa49-4961-9556-f5f0f25fca06\") " pod="openshift-dns-operator/dns-operator-744455d44c-8grgt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.451777 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-audit-dir\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.451819 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-console-config\") pod \"console-f9d7485db-z57gf\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.451842 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v569s\" (UniqueName: \"kubernetes.io/projected/f9d96dcf-b094-485c-8636-401ccc71e918-kube-api-access-v569s\") pod \"controller-manager-879f6c89f-2j567\" (UID: \"f9d96dcf-b094-485c-8636-401ccc71e918\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.451898 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-audit-dir\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.452049 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.452140 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.452220 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-audit-dir\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.452341 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc6f4802-99e9-4a10-bdfb-f132a81023eb-config\") pod \"console-operator-58897d9998-8h784\" (UID: \"bc6f4802-99e9-4a10-bdfb-f132a81023eb\") " pod="openshift-console-operator/console-operator-58897d9998-8h784" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.452444 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02e136c3-8346-4d92-bc4f-fbee60798447-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwkt\" (UID: \"02e136c3-8346-4d92-bc4f-fbee60798447\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwkt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.452675 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-serving-cert\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.452733 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9d96dcf-b094-485c-8636-401ccc71e918-client-ca\") pod \"controller-manager-879f6c89f-2j567\" (UID: \"f9d96dcf-b094-485c-8636-401ccc71e918\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.452802 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxznq\" (UniqueName: \"kubernetes.io/projected/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-kube-api-access-kxznq\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.452858 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g92lg\" (UniqueName: 
\"kubernetes.io/projected/4e24af32-2b91-4772-b26e-a683ba5c3d16-kube-api-access-g92lg\") pod \"openshift-config-operator-7777fb866f-qlbvc\" (UID: \"4e24af32-2b91-4772-b26e-a683ba5c3d16\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.452921 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgtd6\" (UniqueName: \"kubernetes.io/projected/bc6f4802-99e9-4a10-bdfb-f132a81023eb-kube-api-access-pgtd6\") pod \"console-operator-58897d9998-8h784\" (UID: \"bc6f4802-99e9-4a10-bdfb-f132a81023eb\") " pod="openshift-console-operator/console-operator-58897d9998-8h784" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.453052 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85gmx\" (UniqueName: \"kubernetes.io/projected/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-kube-api-access-85gmx\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.453171 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b741ee63-8420-4991-9682-69a55770d9c6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-sjk2k\" (UID: \"b741ee63-8420-4991-9682-69a55770d9c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjk2k" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.453247 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwn95\" (UniqueName: \"kubernetes.io/projected/b741ee63-8420-4991-9682-69a55770d9c6-kube-api-access-kwn95\") pod \"cluster-image-registry-operator-dc59b4c8b-sjk2k\" (UID: \"b741ee63-8420-4991-9682-69a55770d9c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjk2k" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.453329 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37d0c18f-2c13-4cb8-8523-853e305dfa47-service-ca-bundle\") pod \"authentication-operator-69f744f599-gt9s9\" (UID: \"37d0c18f-2c13-4cb8-8523-853e305dfa47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gt9s9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.453404 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e10d1772-1876-4d4f-a2f8-9cbd70d5e7c9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rmpbk\" (UID: \"e10d1772-1876-4d4f-a2f8-9cbd70d5e7c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmpbk" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.453478 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa662f18-6ab4-43b8-8e65-8de41043b74d-console-serving-cert\") pod \"console-f9d7485db-z57gf\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.453551 4620 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b6421d4-a97f-4867-a78c-50ba4d6486ea-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qpx6x\" (UID: \"9b6421d4-a97f-4867-a78c-50ba4d6486ea\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qpx6x" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.453624 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/02381b1b-463c-4532-a690-deee86ffc674-images\") pod \"machine-api-operator-5694c8668f-swx9b\" (UID: \"02381b1b-463c-4532-a690-deee86ffc674\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-swx9b" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.453709 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-etcd-client\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.453801 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfwvq\" (UniqueName: \"kubernetes.io/projected/37d0c18f-2c13-4cb8-8523-853e305dfa47-kube-api-access-qfwvq\") pod \"authentication-operator-69f744f599-gt9s9\" (UID: \"37d0c18f-2c13-4cb8-8523-853e305dfa47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gt9s9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.453882 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4e24af32-2b91-4772-b26e-a683ba5c3d16-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qlbvc\" (UID: \"4e24af32-2b91-4772-b26e-a683ba5c3d16\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.453957 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-service-ca\") pod \"console-f9d7485db-z57gf\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.454026 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4jww\" (UniqueName: \"kubernetes.io/projected/e787f3bd-fa49-4961-9556-f5f0f25fca06-kube-api-access-d4jww\") pod \"dns-operator-744455d44c-8grgt\" (UID: \"e787f3bd-fa49-4961-9556-f5f0f25fca06\") " pod="openshift-dns-operator/dns-operator-744455d44c-8grgt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.454093 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37d0c18f-2c13-4cb8-8523-853e305dfa47-serving-cert\") pod \"authentication-operator-69f744f599-gt9s9\" (UID: \"37d0c18f-2c13-4cb8-8523-853e305dfa47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gt9s9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.454157 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/02381b1b-463c-4532-a690-deee86ffc674-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-swx9b\" (UID: \"02381b1b-463c-4532-a690-deee86ffc674\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-swx9b" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.454226 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9d96dcf-b094-485c-8636-401ccc71e918-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2j567\" (UID: \"f9d96dcf-b094-485c-8636-401ccc71e918\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.454299 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdgff\" (UniqueName: \"kubernetes.io/projected/02381b1b-463c-4532-a690-deee86ffc674-kube-api-access-wdgff\") pod \"machine-api-operator-5694c8668f-swx9b\" (UID: \"02381b1b-463c-4532-a690-deee86ffc674\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-swx9b" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.454368 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-encryption-config\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.454510 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.454587 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npghp\" (UniqueName: \"kubernetes.io/projected/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-kube-api-access-npghp\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.454654 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bfd69ce1-51c3-44b0-81e5-576a633bff91-machine-approver-tls\") pod \"machine-approver-56656f9798-tgs4s\" (UID: \"bfd69ce1-51c3-44b0-81e5-576a633bff91\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgs4s" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.454719 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.454811 4620 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-trusted-ca-bundle\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.456659 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.457719 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9b6421d4-a97f-4867-a78c-50ba4d6486ea-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qpx6x\" (UID: \"9b6421d4-a97f-4867-a78c-50ba4d6486ea\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qpx6x" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.461256 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gdsbs"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.462546 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gdsbs" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.462883 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qpx6x"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.465322 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tzmcd"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.466037 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ncw89"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.467529 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ncw89" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.467818 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.469936 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lftvg"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.470431 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lftvg" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.472644 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-27rrp"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.473979 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-27rrp" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.474816 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jfjwn"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.476148 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.476806 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zsh8m"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.477524 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jfjwn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.478170 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjk2k"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.478193 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.478200 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gt9s9"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.479179 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-x75k8"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.480372 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.481132 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8h784"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.481258 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.482427 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwkt"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.485028 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-z57gf"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.486445 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.487557 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-swx9b"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.490103 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmpbk"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.491529 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-bgtck"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.492486 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-bgtck" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.494438 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5qflb"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.497008 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cl5pv"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.497193 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.498582 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5lxk8"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.499636 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8grgt"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.500364 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-jvvlj"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.501030 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jvvlj" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.502413 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8q7cm"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.504167 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4cnjz"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.510728 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-xbhzj"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.512326 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2j567"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.513647 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thq9h"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.515526 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.516391 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.517609 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.518013 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jfjwn"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.519489 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sqrtn"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.520543 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ncw89"] Jan 29 15:03:48 crc 
kubenswrapper[4620]: I0129 15:03:48.521528 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zsh8m"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.522748 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8v2d"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.524367 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-gwr5k"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.533492 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w8bnd"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.536844 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.539454 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5h6r9"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.541356 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-c8czx"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.542873 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-27rrp"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.544984 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lftvg"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.546422 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gdsbs"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.547857 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-8srf4"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.548789 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8srf4" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.549747 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.551562 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bgtck"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.554167 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tzmcd"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556058 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8srf4"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556349 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txnh5\" (UniqueName: \"kubernetes.io/projected/ea9233e9-49ad-4164-a7c5-1d02ca87560a-kube-api-access-txnh5\") pod \"kube-storage-version-migrator-operator-b67b599dd-thq9h\" (UID: \"ea9233e9-49ad-4164-a7c5-1d02ca87560a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thq9h" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556387 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9d96dcf-b094-485c-8636-401ccc71e918-serving-cert\") pod \"controller-manager-879f6c89f-2j567\" (UID: \"f9d96dcf-b094-485c-8636-401ccc71e918\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556416 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9v4l\" (UniqueName: \"kubernetes.io/projected/71877c8f-d05b-42ca-ad80-7ee3277d9558-kube-api-access-h9v4l\") pod \"packageserver-d55dfcdfc-tzb9r\" (UID: \"71877c8f-d05b-42ca-ad80-7ee3277d9558\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556442 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc6f4802-99e9-4a10-bdfb-f132a81023eb-serving-cert\") pod \"console-operator-58897d9998-8h784\" (UID: \"bc6f4802-99e9-4a10-bdfb-f132a81023eb\") " pod="openshift-console-operator/console-operator-58897d9998-8h784" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556462 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-encryption-config\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556480 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b2a43de-fb0b-4468-af29-c7436e08fb13-etcd-service-ca\") pod \"etcd-operator-b45778765-sqrtn\" (UID: \"0b2a43de-fb0b-4468-af29-c7436e08fb13\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556496 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b2a43de-fb0b-4468-af29-c7436e08fb13-etcd-client\") pod \"etcd-operator-b45778765-sqrtn\" (UID: \"0b2a43de-fb0b-4468-af29-c7436e08fb13\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556513 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cc4969bb-ac64-4361-8666-99de6de39271-stats-auth\") pod \"router-default-5444994796-5ppqg\" (UID: \"cc4969bb-ac64-4361-8666-99de6de39271\") " pod="openshift-ingress/router-default-5444994796-5ppqg" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556540 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlqdq\" (UniqueName: \"kubernetes.io/projected/c5b2667b-8dba-45fe-8bc5-ca0289d36d96-kube-api-access-zlqdq\") pod \"machine-config-operator-74547568cd-xbhzj\" (UID: \"c5b2667b-8dba-45fe-8bc5-ca0289d36d96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xbhzj" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556564 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-audit-policies\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556585 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bfd69ce1-51c3-44b0-81e5-576a633bff91-auth-proxy-config\") pod \"machine-approver-56656f9798-tgs4s\" (UID: \"bfd69ce1-51c3-44b0-81e5-576a633bff91\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgs4s" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556605 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71877c8f-d05b-42ca-ad80-7ee3277d9558-webhook-cert\") pod \"packageserver-d55dfcdfc-tzb9r\" (UID: \"71877c8f-d05b-42ca-ad80-7ee3277d9558\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556635 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea9233e9-49ad-4164-a7c5-1d02ca87560a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-thq9h\" (UID: \"ea9233e9-49ad-4164-a7c5-1d02ca87560a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thq9h" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556656 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxlfr\" (UniqueName: \"kubernetes.io/projected/d44c8c1f-8400-4291-b4c0-fa7d8a8d584a-kube-api-access-hxlfr\") pod \"package-server-manager-789f6589d5-cl5pv\" (UID: \"d44c8c1f-8400-4291-b4c0-fa7d8a8d584a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cl5pv" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556677 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e10d1772-1876-4d4f-a2f8-9cbd70d5e7c9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rmpbk\" (UID: \"e10d1772-1876-4d4f-a2f8-9cbd70d5e7c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmpbk" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556698 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cb416312-125b-4c11-9ecd-07a06d5e6c02-metrics-tls\") pod \"dns-default-bgtck\" (UID: \"cb416312-125b-4c11-9ecd-07a06d5e6c02\") " pod="openshift-dns/dns-default-bgtck" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556729 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cc4969bb-ac64-4361-8666-99de6de39271-default-certificate\") pod \"router-default-5444994796-5ppqg\" (UID: \"cc4969bb-ac64-4361-8666-99de6de39271\") " pod="openshift-ingress/router-default-5444994796-5ppqg" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556786 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cbc1d8f6-fc70-4349-b70a-44751f67425f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4cnjz\" (UID: \"cbc1d8f6-fc70-4349-b70a-44751f67425f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cnjz" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556808 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv8zz\" (UniqueName: \"kubernetes.io/projected/82284043-1bad-4d00-ab41-f15f62b48fdb-kube-api-access-lv8zz\") pod \"service-ca-operator-777779d784-jfjwn\" (UID: \"82284043-1bad-4d00-ab41-f15f62b48fdb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jfjwn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556835 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556854 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-audit-dir\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556877 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-console-config\") pod \"console-f9d7485db-z57gf\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556898 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v569s\" (UniqueName: \"kubernetes.io/projected/f9d96dcf-b094-485c-8636-401ccc71e918-kube-api-access-v569s\") pod \"controller-manager-879f6c89f-2j567\" 
(UID: \"f9d96dcf-b094-485c-8636-401ccc71e918\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556920 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7436f3bf-66e4-4314-aa0b-8af645dd5bee-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zsh8m\" (UID: \"7436f3bf-66e4-4314-aa0b-8af645dd5bee\") " pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556941 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-audit-dir\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556973 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc6f4802-99e9-4a10-bdfb-f132a81023eb-config\") pod \"console-operator-58897d9998-8h784\" (UID: \"bc6f4802-99e9-4a10-bdfb-f132a81023eb\") " pod="openshift-console-operator/console-operator-58897d9998-8h784" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.556993 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02e136c3-8346-4d92-bc4f-fbee60798447-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwkt\" (UID: \"02e136c3-8346-4d92-bc4f-fbee60798447\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwkt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557014 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g92lg\" (UniqueName: \"kubernetes.io/projected/4e24af32-2b91-4772-b26e-a683ba5c3d16-kube-api-access-g92lg\") pod \"openshift-config-operator-7777fb866f-qlbvc\" (UID: \"4e24af32-2b91-4772-b26e-a683ba5c3d16\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557033 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9d96dcf-b094-485c-8636-401ccc71e918-client-ca\") pod \"controller-manager-879f6c89f-2j567\" (UID: \"f9d96dcf-b094-485c-8636-401ccc71e918\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557055 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxznq\" (UniqueName: \"kubernetes.io/projected/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-kube-api-access-kxznq\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557077 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/73c298d0-47e0-4026-bb37-871f2357393d-profile-collector-cert\") pod \"catalog-operator-68c6474976-5h6r9\" (UID: \"73c298d0-47e0-4026-bb37-871f2357393d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5h6r9" Jan 29 15:03:48 crc 
kubenswrapper[4620]: I0129 15:03:48.557099 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgtd6\" (UniqueName: \"kubernetes.io/projected/bc6f4802-99e9-4a10-bdfb-f132a81023eb-kube-api-access-pgtd6\") pod \"console-operator-58897d9998-8h784\" (UID: \"bc6f4802-99e9-4a10-bdfb-f132a81023eb\") " pod="openshift-console-operator/console-operator-58897d9998-8h784" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557121 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b741ee63-8420-4991-9682-69a55770d9c6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-sjk2k\" (UID: \"b741ee63-8420-4991-9682-69a55770d9c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjk2k" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557140 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37d0c18f-2c13-4cb8-8523-853e305dfa47-service-ca-bundle\") pod \"authentication-operator-69f744f599-gt9s9\" (UID: \"37d0c18f-2c13-4cb8-8523-853e305dfa47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gt9s9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557161 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e10d1772-1876-4d4f-a2f8-9cbd70d5e7c9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rmpbk\" (UID: \"e10d1772-1876-4d4f-a2f8-9cbd70d5e7c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmpbk" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557182 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcxk7\" (UniqueName: \"kubernetes.io/projected/e1ad943a-d557-4727-b4e9-a863aae1a47d-kube-api-access-pcxk7\") pod \"control-plane-machine-set-operator-78cbb6b69f-27rrp\" (UID: \"e1ad943a-d557-4727-b4e9-a863aae1a47d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-27rrp" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557210 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dz9m\" (UniqueName: \"kubernetes.io/projected/7436f3bf-66e4-4314-aa0b-8af645dd5bee-kube-api-access-5dz9m\") pod \"marketplace-operator-79b997595-zsh8m\" (UID: \"7436f3bf-66e4-4314-aa0b-8af645dd5bee\") " pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557230 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/02381b1b-463c-4532-a690-deee86ffc674-images\") pod \"machine-api-operator-5694c8668f-swx9b\" (UID: \"02381b1b-463c-4532-a690-deee86ffc674\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-swx9b" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557251 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/713c81b7-8d56-4d58-bd4e-f827de0ca17b-config-volume\") pod \"collect-profiles-29494980-dkmz9\" (UID: \"713c81b7-8d56-4d58-bd4e-f827de0ca17b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9" Jan 29 15:03:48 crc 
kubenswrapper[4620]: I0129 15:03:48.557273 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4jww\" (UniqueName: \"kubernetes.io/projected/e787f3bd-fa49-4961-9556-f5f0f25fca06-kube-api-access-d4jww\") pod \"dns-operator-744455d44c-8grgt\" (UID: \"e787f3bd-fa49-4961-9556-f5f0f25fca06\") " pod="openshift-dns-operator/dns-operator-744455d44c-8grgt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557294 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/02381b1b-463c-4532-a690-deee86ffc674-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-swx9b\" (UID: \"02381b1b-463c-4532-a690-deee86ffc674\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-swx9b" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557316 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82284043-1bad-4d00-ab41-f15f62b48fdb-config\") pod \"service-ca-operator-777779d784-jfjwn\" (UID: \"82284043-1bad-4d00-ab41-f15f62b48fdb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jfjwn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557335 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557358 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npghp\" (UniqueName: \"kubernetes.io/projected/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-kube-api-access-npghp\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557378 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bfd69ce1-51c3-44b0-81e5-576a633bff91-machine-approver-tls\") pod \"machine-approver-56656f9798-tgs4s\" (UID: \"bfd69ce1-51c3-44b0-81e5-576a633bff91\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgs4s" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557400 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b2a43de-fb0b-4468-af29-c7436e08fb13-etcd-ca\") pod \"etcd-operator-b45778765-sqrtn\" (UID: \"0b2a43de-fb0b-4468-af29-c7436e08fb13\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557421 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557440 4620 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea9233e9-49ad-4164-a7c5-1d02ca87560a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-thq9h\" (UID: \"ea9233e9-49ad-4164-a7c5-1d02ca87560a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thq9h" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557472 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9295f40-e604-4f7f-9f79-f44f1a5f9bc3-config\") pod \"kube-controller-manager-operator-78b949d7b-z8v2d\" (UID: \"e9295f40-e604-4f7f-9f79-f44f1a5f9bc3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8v2d" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557491 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsvpk\" (UniqueName: \"kubernetes.io/projected/cb416312-125b-4c11-9ecd-07a06d5e6c02-kube-api-access-zsvpk\") pod \"dns-default-bgtck\" (UID: \"cb416312-125b-4c11-9ecd-07a06d5e6c02\") " pod="openshift-dns/dns-default-bgtck" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557551 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krnkn\" (UniqueName: \"kubernetes.io/projected/0df986bd-6456-42fd-b063-301263d6e7ce-kube-api-access-krnkn\") pod \"migrator-59844c95c7-5lxk8\" (UID: \"0df986bd-6456-42fd-b063-301263d6e7ce\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5lxk8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557572 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-node-pullsecrets\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557623 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b741ee63-8420-4991-9682-69a55770d9c6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-sjk2k\" (UID: \"b741ee63-8420-4991-9682-69a55770d9c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjk2k" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557661 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5b2667b-8dba-45fe-8bc5-ca0289d36d96-auth-proxy-config\") pod \"machine-config-operator-74547568cd-xbhzj\" (UID: \"c5b2667b-8dba-45fe-8bc5-ca0289d36d96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xbhzj" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557706 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75a509e8-6d94-4a62-b97d-ea05b87b1b74-config\") pod \"kube-apiserver-operator-766d6c64bb-5qflb\" (UID: \"75a509e8-6d94-4a62-b97d-ea05b87b1b74\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5qflb" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557726 4620 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-etcd-client\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557775 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-audit\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557801 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rghwj\" (UniqueName: \"kubernetes.io/projected/8a1b8f5b-4123-4cc0-9d2c-dca6ae8453d5-kube-api-access-rghwj\") pod \"downloads-7954f5f757-gwr5k\" (UID: \"8a1b8f5b-4123-4cc0-9d2c-dca6ae8453d5\") " pod="openshift-console/downloads-7954f5f757-gwr5k" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557819 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-trusted-ca-bundle\") pod \"console-f9d7485db-z57gf\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557945 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/737e8cb6-704f-4255-8985-ee18874f127d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w8bnd\" (UID: \"737e8cb6-704f-4255-8985-ee18874f127d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w8bnd" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557973 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.557992 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hncd6\" (UniqueName: \"kubernetes.io/projected/9b19c06b-4261-4573-a304-a28976a1c610-kube-api-access-hncd6\") pod \"ingress-canary-8srf4\" (UID: \"9b19c06b-4261-4573-a304-a28976a1c610\") " pod="openshift-ingress-canary/ingress-canary-8srf4" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.558117 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-audit-policies\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.558143 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37d0c18f-2c13-4cb8-8523-853e305dfa47-config\") pod \"authentication-operator-69f744f599-gt9s9\" (UID: 
\"37d0c18f-2c13-4cb8-8523-853e305dfa47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gt9s9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.558270 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aa662f18-6ab4-43b8-8e65-8de41043b74d-console-oauth-config\") pod \"console-f9d7485db-z57gf\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.558310 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wldms\" (UniqueName: \"kubernetes.io/projected/bfd69ce1-51c3-44b0-81e5-576a633bff91-kube-api-access-wldms\") pod \"machine-approver-56656f9798-tgs4s\" (UID: \"bfd69ce1-51c3-44b0-81e5-576a633bff91\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgs4s" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.559840 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9295f40-e604-4f7f-9f79-f44f1a5f9bc3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-z8v2d\" (UID: \"e9295f40-e604-4f7f-9f79-f44f1a5f9bc3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8v2d" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561453 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82284043-1bad-4d00-ab41-f15f62b48fdb-serving-cert\") pod \"service-ca-operator-777779d784-jfjwn\" (UID: \"82284043-1bad-4d00-ab41-f15f62b48fdb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jfjwn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561474 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71877c8f-d05b-42ca-ad80-7ee3277d9558-apiservice-cert\") pod \"packageserver-d55dfcdfc-tzb9r\" (UID: \"71877c8f-d05b-42ca-ad80-7ee3277d9558\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561483 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc6f4802-99e9-4a10-bdfb-f132a81023eb-serving-cert\") pod \"console-operator-58897d9998-8h784\" (UID: \"bc6f4802-99e9-4a10-bdfb-f132a81023eb\") " pod="openshift-console-operator/console-operator-58897d9998-8h784" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561494 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l2nj\" (UniqueName: \"kubernetes.io/projected/0b2a43de-fb0b-4468-af29-c7436e08fb13-kube-api-access-2l2nj\") pod \"etcd-operator-b45778765-sqrtn\" (UID: \"0b2a43de-fb0b-4468-af29-c7436e08fb13\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.560216 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lzb7h"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561531 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-encryption-config\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561583 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-node-pullsecrets\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561514 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737e8cb6-704f-4255-8985-ee18874f127d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w8bnd\" (UID: \"737e8cb6-704f-4255-8985-ee18874f127d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w8bnd" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561099 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc6f4802-99e9-4a10-bdfb-f132a81023eb-config\") pod \"console-operator-58897d9998-8h784\" (UID: \"bc6f4802-99e9-4a10-bdfb-f132a81023eb\") " pod="openshift-console-operator/console-operator-58897d9998-8h784" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561616 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/55fa1078-ff94-4efb-80ea-3bcf65192874-metrics-tls\") pod \"ingress-operator-5b745b69d9-c8czx\" (UID: \"55fa1078-ff94-4efb-80ea-3bcf65192874\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8czx" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561671 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bc6f4802-99e9-4a10-bdfb-f132a81023eb-trusted-ca\") pod \"console-operator-58897d9998-8h784\" (UID: \"bc6f4802-99e9-4a10-bdfb-f132a81023eb\") " pod="openshift-console-operator/console-operator-58897d9998-8h784" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561689 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-976x2\" (UniqueName: \"kubernetes.io/projected/02e136c3-8346-4d92-bc4f-fbee60798447-kube-api-access-976x2\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwkt\" (UID: \"02e136c3-8346-4d92-bc4f-fbee60798447\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwkt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561708 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561728 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b2a43de-fb0b-4468-af29-c7436e08fb13-serving-cert\") pod \"etcd-operator-b45778765-sqrtn\" (UID: \"0b2a43de-fb0b-4468-af29-c7436e08fb13\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561748 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84xcd\" (UniqueName: \"kubernetes.io/projected/55fa1078-ff94-4efb-80ea-3bcf65192874-kube-api-access-84xcd\") pod \"ingress-operator-5b745b69d9-c8czx\" (UID: \"55fa1078-ff94-4efb-80ea-3bcf65192874\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8czx" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561794 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02e136c3-8346-4d92-bc4f-fbee60798447-config\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwkt\" (UID: \"02e136c3-8346-4d92-bc4f-fbee60798447\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwkt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561814 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-serving-cert\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561831 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9d96dcf-b094-485c-8636-401ccc71e918-config\") pod \"controller-manager-879f6c89f-2j567\" (UID: \"f9d96dcf-b094-485c-8636-401ccc71e918\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561847 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfd69ce1-51c3-44b0-81e5-576a633bff91-config\") pod \"machine-approver-56656f9798-tgs4s\" (UID: \"bfd69ce1-51c3-44b0-81e5-576a633bff91\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgs4s" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561864 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7vv7\" (UniqueName: \"kubernetes.io/projected/cc4969bb-ac64-4361-8666-99de6de39271-kube-api-access-x7vv7\") pod \"router-default-5444994796-5ppqg\" (UID: \"cc4969bb-ac64-4361-8666-99de6de39271\") " pod="openshift-ingress/router-default-5444994796-5ppqg" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561881 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-oauth-serving-cert\") pod \"console-f9d7485db-z57gf\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561904 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lgkk\" (UniqueName: \"kubernetes.io/projected/aa662f18-6ab4-43b8-8e65-8de41043b74d-kube-api-access-2lgkk\") pod \"console-f9d7485db-z57gf\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561928 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/737e8cb6-704f-4255-8985-ee18874f127d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w8bnd\" (UID: \"737e8cb6-704f-4255-8985-ee18874f127d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w8bnd" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561954 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d44c8c1f-8400-4291-b4c0-fa7d8a8d584a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cl5pv\" (UID: \"d44c8c1f-8400-4291-b4c0-fa7d8a8d584a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cl5pv" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561990 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/71877c8f-d05b-42ca-ad80-7ee3277d9558-tmpfs\") pod \"packageserver-d55dfcdfc-tzb9r\" (UID: \"71877c8f-d05b-42ca-ad80-7ee3277d9558\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562012 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e24af32-2b91-4772-b26e-a683ba5c3d16-serving-cert\") pod \"openshift-config-operator-7777fb866f-qlbvc\" (UID: \"4e24af32-2b91-4772-b26e-a683ba5c3d16\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562027 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc4969bb-ac64-4361-8666-99de6de39271-service-ca-bundle\") pod \"router-default-5444994796-5ppqg\" (UID: \"cc4969bb-ac64-4361-8666-99de6de39271\") " pod="openshift-ingress/router-default-5444994796-5ppqg" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562043 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9295f40-e604-4f7f-9f79-f44f1a5f9bc3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z8v2d\" (UID: \"e9295f40-e604-4f7f-9f79-f44f1a5f9bc3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8v2d" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562058 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55fa1078-ff94-4efb-80ea-3bcf65192874-trusted-ca\") pod \"ingress-operator-5b745b69d9-c8czx\" (UID: \"55fa1078-ff94-4efb-80ea-3bcf65192874\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8czx" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562075 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/713c81b7-8d56-4d58-bd4e-f827de0ca17b-secret-volume\") pod \"collect-profiles-29494980-dkmz9\" (UID: \"713c81b7-8d56-4d58-bd4e-f827de0ca17b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562093 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f35cc7a6-91ec-4a66-8bb1-6418c351eaf3-srv-cert\") pod \"olm-operator-6b444d44fb-lftvg\" (UID: \"f35cc7a6-91ec-4a66-8bb1-6418c351eaf3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lftvg" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562110 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-image-import-ca\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562129 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e787f3bd-fa49-4961-9556-f5f0f25fca06-metrics-tls\") pod \"dns-operator-744455d44c-8grgt\" (UID: \"e787f3bd-fa49-4961-9556-f5f0f25fca06\") " pod="openshift-dns-operator/dns-operator-744455d44c-8grgt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562146 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562163 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-audit-dir\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562179 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562197 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562213 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f35cc7a6-91ec-4a66-8bb1-6418c351eaf3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lftvg\" (UID: \"f35cc7a6-91ec-4a66-8bb1-6418c351eaf3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lftvg" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562233 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-serving-cert\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562249 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb416312-125b-4c11-9ecd-07a06d5e6c02-config-volume\") pod \"dns-default-bgtck\" (UID: \"cb416312-125b-4c11-9ecd-07a06d5e6c02\") " pod="openshift-dns/dns-default-bgtck" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562266 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c5b2667b-8dba-45fe-8bc5-ca0289d36d96-images\") pod \"machine-config-operator-74547568cd-xbhzj\" (UID: \"c5b2667b-8dba-45fe-8bc5-ca0289d36d96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xbhzj" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562281 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b19c06b-4261-4573-a304-a28976a1c610-cert\") pod \"ingress-canary-8srf4\" (UID: \"9b19c06b-4261-4573-a304-a28976a1c610\") " pod="openshift-ingress-canary/ingress-canary-8srf4" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562299 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85gmx\" (UniqueName: \"kubernetes.io/projected/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-kube-api-access-85gmx\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562316 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwn95\" (UniqueName: \"kubernetes.io/projected/b741ee63-8420-4991-9682-69a55770d9c6-kube-api-access-kwn95\") pod \"cluster-image-registry-operator-dc59b4c8b-sjk2k\" (UID: \"b741ee63-8420-4991-9682-69a55770d9c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjk2k" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562338 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qq2j\" (UniqueName: \"kubernetes.io/projected/73c298d0-47e0-4026-bb37-871f2357393d-kube-api-access-6qq2j\") pod \"catalog-operator-68c6474976-5h6r9\" (UID: \"73c298d0-47e0-4026-bb37-871f2357393d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5h6r9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562357 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa662f18-6ab4-43b8-8e65-8de41043b74d-console-serving-cert\") pod \"console-f9d7485db-z57gf\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562375 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c5b2667b-8dba-45fe-8bc5-ca0289d36d96-proxy-tls\") pod \"machine-config-operator-74547568cd-xbhzj\" (UID: \"c5b2667b-8dba-45fe-8bc5-ca0289d36d96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xbhzj" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562390 4620 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h6vq\" (UniqueName: \"kubernetes.io/projected/f35cc7a6-91ec-4a66-8bb1-6418c351eaf3-kube-api-access-5h6vq\") pod \"olm-operator-6b444d44fb-lftvg\" (UID: \"f35cc7a6-91ec-4a66-8bb1-6418c351eaf3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lftvg" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562391 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e10d1772-1876-4d4f-a2f8-9cbd70d5e7c9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rmpbk\" (UID: \"e10d1772-1876-4d4f-a2f8-9cbd70d5e7c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmpbk" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562407 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-etcd-client\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562454 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55fa1078-ff94-4efb-80ea-3bcf65192874-bound-sa-token\") pod \"ingress-operator-5b745b69d9-c8czx\" (UID: \"55fa1078-ff94-4efb-80ea-3bcf65192874\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8czx" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562486 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e1ad943a-d557-4727-b4e9-a863aae1a47d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-27rrp\" (UID: \"e1ad943a-d557-4727-b4e9-a863aae1a47d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-27rrp" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562524 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfwvq\" (UniqueName: \"kubernetes.io/projected/37d0c18f-2c13-4cb8-8523-853e305dfa47-kube-api-access-qfwvq\") pod \"authentication-operator-69f744f599-gt9s9\" (UID: \"37d0c18f-2c13-4cb8-8523-853e305dfa47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gt9s9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562554 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4e24af32-2b91-4772-b26e-a683ba5c3d16-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qlbvc\" (UID: \"4e24af32-2b91-4772-b26e-a683ba5c3d16\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562583 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-service-ca\") pod \"console-f9d7485db-z57gf\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562609 4620 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc4969bb-ac64-4361-8666-99de6de39271-metrics-certs\") pod \"router-default-5444994796-5ppqg\" (UID: \"cc4969bb-ac64-4361-8666-99de6de39271\") " pod="openshift-ingress/router-default-5444994796-5ppqg" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562635 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75a509e8-6d94-4a62-b97d-ea05b87b1b74-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5qflb\" (UID: \"75a509e8-6d94-4a62-b97d-ea05b87b1b74\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5qflb" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562668 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37d0c18f-2c13-4cb8-8523-853e305dfa47-serving-cert\") pod \"authentication-operator-69f744f599-gt9s9\" (UID: \"37d0c18f-2c13-4cb8-8523-853e305dfa47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gt9s9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562693 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9d96dcf-b094-485c-8636-401ccc71e918-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2j567\" (UID: \"f9d96dcf-b094-485c-8636-401ccc71e918\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562701 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lzb7h"] Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562718 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdgff\" (UniqueName: \"kubernetes.io/projected/02381b1b-463c-4532-a690-deee86ffc674-kube-api-access-wdgff\") pod \"machine-api-operator-5694c8668f-swx9b\" (UID: \"02381b1b-463c-4532-a690-deee86ffc674\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-swx9b" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562747 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-encryption-config\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562799 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562805 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b2a43de-fb0b-4468-af29-c7436e08fb13-config\") pod \"etcd-operator-b45778765-sqrtn\" (UID: \"0b2a43de-fb0b-4468-af29-c7436e08fb13\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562832 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-trusted-ca-bundle\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562856 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562882 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562906 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-config\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562933 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cbc1d8f6-fc70-4349-b70a-44751f67425f-proxy-tls\") pod \"machine-config-controller-84d6567774-4cnjz\" (UID: \"cbc1d8f6-fc70-4349-b70a-44751f67425f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cnjz" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562957 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75a509e8-6d94-4a62-b97d-ea05b87b1b74-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5qflb\" (UID: \"75a509e8-6d94-4a62-b97d-ea05b87b1b74\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5qflb" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.562988 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.563016 4620 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.563046 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.563072 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-etcd-serving-ca\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.563106 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt5vh\" (UniqueName: \"kubernetes.io/projected/cbc1d8f6-fc70-4349-b70a-44751f67425f-kube-api-access-vt5vh\") pod \"machine-config-controller-84d6567774-4cnjz\" (UID: \"cbc1d8f6-fc70-4349-b70a-44751f67425f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cnjz" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.563133 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrp29\" (UniqueName: \"kubernetes.io/projected/713c81b7-8d56-4d58-bd4e-f827de0ca17b-kube-api-access-hrp29\") pod \"collect-profiles-29494980-dkmz9\" (UID: \"713c81b7-8d56-4d58-bd4e-f827de0ca17b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.563158 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7436f3bf-66e4-4314-aa0b-8af645dd5bee-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zsh8m\" (UID: \"7436f3bf-66e4-4314-aa0b-8af645dd5bee\") " pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.563192 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b741ee63-8420-4991-9682-69a55770d9c6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-sjk2k\" (UID: \"b741ee63-8420-4991-9682-69a55770d9c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjk2k" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.563217 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37d0c18f-2c13-4cb8-8523-853e305dfa47-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gt9s9\" (UID: \"37d0c18f-2c13-4cb8-8523-853e305dfa47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gt9s9" Jan 29 15:03:48 crc kubenswrapper[4620]: 
I0129 15:03:48.563243 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/73c298d0-47e0-4026-bb37-871f2357393d-srv-cert\") pod \"catalog-operator-68c6474976-5h6r9\" (UID: \"73c298d0-47e0-4026-bb37-871f2357393d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5h6r9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.563271 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02381b1b-463c-4532-a690-deee86ffc674-config\") pod \"machine-api-operator-5694c8668f-swx9b\" (UID: \"02381b1b-463c-4532-a690-deee86ffc674\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-swx9b" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.563299 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4b55\" (UniqueName: \"kubernetes.io/projected/e10d1772-1876-4d4f-a2f8-9cbd70d5e7c9-kube-api-access-t4b55\") pod \"openshift-controller-manager-operator-756b6f6bc6-rmpbk\" (UID: \"e10d1772-1876-4d4f-a2f8-9cbd70d5e7c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmpbk" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.563850 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/02381b1b-463c-4532-a690-deee86ffc674-images\") pod \"machine-api-operator-5694c8668f-swx9b\" (UID: \"02381b1b-463c-4532-a690-deee86ffc674\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-swx9b" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.563949 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4e24af32-2b91-4772-b26e-a683ba5c3d16-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qlbvc\" (UID: \"4e24af32-2b91-4772-b26e-a683ba5c3d16\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.564539 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/02381b1b-463c-4532-a690-deee86ffc674-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-swx9b\" (UID: \"02381b1b-463c-4532-a690-deee86ffc674\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-swx9b" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.564698 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-service-ca\") pod \"console-f9d7485db-z57gf\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.565400 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bfd69ce1-51c3-44b0-81e5-576a633bff91-auth-proxy-config\") pod \"machine-approver-56656f9798-tgs4s\" (UID: \"bfd69ce1-51c3-44b0-81e5-576a633bff91\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgs4s" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.565494 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f9d96dcf-b094-485c-8636-401ccc71e918-serving-cert\") pod \"controller-manager-879f6c89f-2j567\" (UID: \"f9d96dcf-b094-485c-8636-401ccc71e918\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.565738 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-etcd-client\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.566123 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-audit-policies\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.558621 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-audit-dir\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.567034 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9d96dcf-b094-485c-8636-401ccc71e918-config\") pod \"controller-manager-879f6c89f-2j567\" (UID: \"f9d96dcf-b094-485c-8636-401ccc71e918\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.567372 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.567472 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfd69ce1-51c3-44b0-81e5-576a633bff91-config\") pod \"machine-approver-56656f9798-tgs4s\" (UID: \"bfd69ce1-51c3-44b0-81e5-576a633bff91\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgs4s" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.567726 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.567808 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-audit-dir\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.558899 4620 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-audit-dir\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.559895 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-console-config\") pod \"console-f9d7485db-z57gf\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.560202 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.560396 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9d96dcf-b094-485c-8636-401ccc71e918-client-ca\") pod \"controller-manager-879f6c89f-2j567\" (UID: \"f9d96dcf-b094-485c-8636-401ccc71e918\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.568728 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-audit\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.570002 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-oauth-serving-cert\") pod \"console-f9d7485db-z57gf\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.570391 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bfd69ce1-51c3-44b0-81e5-576a633bff91-machine-approver-tls\") pod \"machine-approver-56656f9798-tgs4s\" (UID: \"bfd69ce1-51c3-44b0-81e5-576a633bff91\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgs4s" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.570966 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.571031 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bc6f4802-99e9-4a10-bdfb-f132a81023eb-trusted-ca\") pod \"console-operator-58897d9998-8h784\" (UID: \"bc6f4802-99e9-4a10-bdfb-f132a81023eb\") " pod="openshift-console-operator/console-operator-58897d9998-8h784" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.571116 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.571155 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-etcd-client\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.561073 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37d0c18f-2c13-4cb8-8523-853e305dfa47-service-ca-bundle\") pod \"authentication-operator-69f744f599-gt9s9\" (UID: \"37d0c18f-2c13-4cb8-8523-853e305dfa47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gt9s9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.571437 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-trusted-ca-bundle\") pod \"console-f9d7485db-z57gf\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.571777 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa662f18-6ab4-43b8-8e65-8de41043b74d-console-serving-cert\") pod \"console-f9d7485db-z57gf\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.571875 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37d0c18f-2c13-4cb8-8523-853e305dfa47-serving-cert\") pod \"authentication-operator-69f744f599-gt9s9\" (UID: \"37d0c18f-2c13-4cb8-8523-853e305dfa47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gt9s9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.573076 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-audit-policies\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.573328 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-etcd-serving-ca\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.573382 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02e136c3-8346-4d92-bc4f-fbee60798447-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwkt\" (UID: \"02e136c3-8346-4d92-bc4f-fbee60798447\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwkt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.573407 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.573707 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-serving-cert\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.573841 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02e136c3-8346-4d92-bc4f-fbee60798447-config\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwkt\" (UID: \"02e136c3-8346-4d92-bc4f-fbee60798447\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwkt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.573891 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37d0c18f-2c13-4cb8-8523-853e305dfa47-config\") pod \"authentication-operator-69f744f599-gt9s9\" (UID: \"37d0c18f-2c13-4cb8-8523-853e305dfa47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gt9s9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.574036 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b741ee63-8420-4991-9682-69a55770d9c6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-sjk2k\" (UID: \"b741ee63-8420-4991-9682-69a55770d9c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjk2k" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.574156 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e24af32-2b91-4772-b26e-a683ba5c3d16-serving-cert\") pod \"openshift-config-operator-7777fb866f-qlbvc\" (UID: \"4e24af32-2b91-4772-b26e-a683ba5c3d16\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.574348 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9d96dcf-b094-485c-8636-401ccc71e918-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2j567\" (UID: \"f9d96dcf-b094-485c-8636-401ccc71e918\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.574559 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e10d1772-1876-4d4f-a2f8-9cbd70d5e7c9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rmpbk\" (UID: \"e10d1772-1876-4d4f-a2f8-9cbd70d5e7c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmpbk" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.574607 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.575387 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b741ee63-8420-4991-9682-69a55770d9c6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-sjk2k\" (UID: \"b741ee63-8420-4991-9682-69a55770d9c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjk2k" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.575828 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37d0c18f-2c13-4cb8-8523-853e305dfa47-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gt9s9\" (UID: \"37d0c18f-2c13-4cb8-8523-853e305dfa47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gt9s9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.576016 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-config\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.576461 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02381b1b-463c-4532-a690-deee86ffc674-config\") pod \"machine-api-operator-5694c8668f-swx9b\" (UID: \"02381b1b-463c-4532-a690-deee86ffc674\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-swx9b" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.581172 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.583336 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-image-import-ca\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.584457 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-trusted-ca-bundle\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.586462 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-serving-cert\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.587279 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc 
kubenswrapper[4620]: I0129 15:03:48.594177 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.594728 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e787f3bd-fa49-4961-9556-f5f0f25fca06-metrics-tls\") pod \"dns-operator-744455d44c-8grgt\" (UID: \"e787f3bd-fa49-4961-9556-f5f0f25fca06\") " pod="openshift-dns-operator/dns-operator-744455d44c-8grgt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.596295 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.598450 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aa662f18-6ab4-43b8-8e65-8de41043b74d-console-oauth-config\") pod \"console-f9d7485db-z57gf\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.599104 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-encryption-config\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.599468 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.617185 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.636661 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.656499 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664132 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea9233e9-49ad-4164-a7c5-1d02ca87560a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-thq9h\" (UID: \"ea9233e9-49ad-4164-a7c5-1d02ca87560a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thq9h" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664162 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9295f40-e604-4f7f-9f79-f44f1a5f9bc3-config\") pod \"kube-controller-manager-operator-78b949d7b-z8v2d\" (UID: 
\"e9295f40-e604-4f7f-9f79-f44f1a5f9bc3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8v2d" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664183 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsvpk\" (UniqueName: \"kubernetes.io/projected/cb416312-125b-4c11-9ecd-07a06d5e6c02-kube-api-access-zsvpk\") pod \"dns-default-bgtck\" (UID: \"cb416312-125b-4c11-9ecd-07a06d5e6c02\") " pod="openshift-dns/dns-default-bgtck" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664206 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krnkn\" (UniqueName: \"kubernetes.io/projected/0df986bd-6456-42fd-b063-301263d6e7ce-kube-api-access-krnkn\") pod \"migrator-59844c95c7-5lxk8\" (UID: \"0df986bd-6456-42fd-b063-301263d6e7ce\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5lxk8" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664223 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5b2667b-8dba-45fe-8bc5-ca0289d36d96-auth-proxy-config\") pod \"machine-config-operator-74547568cd-xbhzj\" (UID: \"c5b2667b-8dba-45fe-8bc5-ca0289d36d96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xbhzj" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664239 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75a509e8-6d94-4a62-b97d-ea05b87b1b74-config\") pod \"kube-apiserver-operator-766d6c64bb-5qflb\" (UID: \"75a509e8-6d94-4a62-b97d-ea05b87b1b74\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5qflb" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664261 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/737e8cb6-704f-4255-8985-ee18874f127d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w8bnd\" (UID: \"737e8cb6-704f-4255-8985-ee18874f127d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w8bnd" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664277 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hncd6\" (UniqueName: \"kubernetes.io/projected/9b19c06b-4261-4573-a304-a28976a1c610-kube-api-access-hncd6\") pod \"ingress-canary-8srf4\" (UID: \"9b19c06b-4261-4573-a304-a28976a1c610\") " pod="openshift-ingress-canary/ingress-canary-8srf4" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664301 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9295f40-e604-4f7f-9f79-f44f1a5f9bc3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-z8v2d\" (UID: \"e9295f40-e604-4f7f-9f79-f44f1a5f9bc3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8v2d" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664316 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82284043-1bad-4d00-ab41-f15f62b48fdb-serving-cert\") pod \"service-ca-operator-777779d784-jfjwn\" (UID: \"82284043-1bad-4d00-ab41-f15f62b48fdb\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-jfjwn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664332 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71877c8f-d05b-42ca-ad80-7ee3277d9558-apiservice-cert\") pod \"packageserver-d55dfcdfc-tzb9r\" (UID: \"71877c8f-d05b-42ca-ad80-7ee3277d9558\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664349 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l2nj\" (UniqueName: \"kubernetes.io/projected/0b2a43de-fb0b-4468-af29-c7436e08fb13-kube-api-access-2l2nj\") pod \"etcd-operator-b45778765-sqrtn\" (UID: \"0b2a43de-fb0b-4468-af29-c7436e08fb13\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664365 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737e8cb6-704f-4255-8985-ee18874f127d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w8bnd\" (UID: \"737e8cb6-704f-4255-8985-ee18874f127d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w8bnd" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664380 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/55fa1078-ff94-4efb-80ea-3bcf65192874-metrics-tls\") pod \"ingress-operator-5b745b69d9-c8czx\" (UID: \"55fa1078-ff94-4efb-80ea-3bcf65192874\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8czx" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664400 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b2a43de-fb0b-4468-af29-c7436e08fb13-serving-cert\") pod \"etcd-operator-b45778765-sqrtn\" (UID: \"0b2a43de-fb0b-4468-af29-c7436e08fb13\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664415 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84xcd\" (UniqueName: \"kubernetes.io/projected/55fa1078-ff94-4efb-80ea-3bcf65192874-kube-api-access-84xcd\") pod \"ingress-operator-5b745b69d9-c8czx\" (UID: \"55fa1078-ff94-4efb-80ea-3bcf65192874\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8czx" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664440 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7vv7\" (UniqueName: \"kubernetes.io/projected/cc4969bb-ac64-4361-8666-99de6de39271-kube-api-access-x7vv7\") pod \"router-default-5444994796-5ppqg\" (UID: \"cc4969bb-ac64-4361-8666-99de6de39271\") " pod="openshift-ingress/router-default-5444994796-5ppqg" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664463 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737e8cb6-704f-4255-8985-ee18874f127d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w8bnd\" (UID: \"737e8cb6-704f-4255-8985-ee18874f127d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w8bnd" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664480 4620 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d44c8c1f-8400-4291-b4c0-fa7d8a8d584a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cl5pv\" (UID: \"d44c8c1f-8400-4291-b4c0-fa7d8a8d584a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cl5pv" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664502 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/71877c8f-d05b-42ca-ad80-7ee3277d9558-tmpfs\") pod \"packageserver-d55dfcdfc-tzb9r\" (UID: \"71877c8f-d05b-42ca-ad80-7ee3277d9558\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664516 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc4969bb-ac64-4361-8666-99de6de39271-service-ca-bundle\") pod \"router-default-5444994796-5ppqg\" (UID: \"cc4969bb-ac64-4361-8666-99de6de39271\") " pod="openshift-ingress/router-default-5444994796-5ppqg" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664530 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9295f40-e604-4f7f-9f79-f44f1a5f9bc3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z8v2d\" (UID: \"e9295f40-e604-4f7f-9f79-f44f1a5f9bc3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8v2d" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664545 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55fa1078-ff94-4efb-80ea-3bcf65192874-trusted-ca\") pod \"ingress-operator-5b745b69d9-c8czx\" (UID: \"55fa1078-ff94-4efb-80ea-3bcf65192874\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8czx" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664561 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/713c81b7-8d56-4d58-bd4e-f827de0ca17b-secret-volume\") pod \"collect-profiles-29494980-dkmz9\" (UID: \"713c81b7-8d56-4d58-bd4e-f827de0ca17b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664575 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f35cc7a6-91ec-4a66-8bb1-6418c351eaf3-srv-cert\") pod \"olm-operator-6b444d44fb-lftvg\" (UID: \"f35cc7a6-91ec-4a66-8bb1-6418c351eaf3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lftvg" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664599 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f35cc7a6-91ec-4a66-8bb1-6418c351eaf3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lftvg\" (UID: \"f35cc7a6-91ec-4a66-8bb1-6418c351eaf3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lftvg" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664613 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/cb416312-125b-4c11-9ecd-07a06d5e6c02-config-volume\") pod \"dns-default-bgtck\" (UID: \"cb416312-125b-4c11-9ecd-07a06d5e6c02\") " pod="openshift-dns/dns-default-bgtck" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664627 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c5b2667b-8dba-45fe-8bc5-ca0289d36d96-images\") pod \"machine-config-operator-74547568cd-xbhzj\" (UID: \"c5b2667b-8dba-45fe-8bc5-ca0289d36d96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xbhzj" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664640 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b19c06b-4261-4573-a304-a28976a1c610-cert\") pod \"ingress-canary-8srf4\" (UID: \"9b19c06b-4261-4573-a304-a28976a1c610\") " pod="openshift-ingress-canary/ingress-canary-8srf4" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664665 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qq2j\" (UniqueName: \"kubernetes.io/projected/73c298d0-47e0-4026-bb37-871f2357393d-kube-api-access-6qq2j\") pod \"catalog-operator-68c6474976-5h6r9\" (UID: \"73c298d0-47e0-4026-bb37-871f2357393d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5h6r9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664681 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c5b2667b-8dba-45fe-8bc5-ca0289d36d96-proxy-tls\") pod \"machine-config-operator-74547568cd-xbhzj\" (UID: \"c5b2667b-8dba-45fe-8bc5-ca0289d36d96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xbhzj" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664696 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h6vq\" (UniqueName: \"kubernetes.io/projected/f35cc7a6-91ec-4a66-8bb1-6418c351eaf3-kube-api-access-5h6vq\") pod \"olm-operator-6b444d44fb-lftvg\" (UID: \"f35cc7a6-91ec-4a66-8bb1-6418c351eaf3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lftvg" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664710 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55fa1078-ff94-4efb-80ea-3bcf65192874-bound-sa-token\") pod \"ingress-operator-5b745b69d9-c8czx\" (UID: \"55fa1078-ff94-4efb-80ea-3bcf65192874\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8czx" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664730 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e1ad943a-d557-4727-b4e9-a863aae1a47d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-27rrp\" (UID: \"e1ad943a-d557-4727-b4e9-a863aae1a47d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-27rrp" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664751 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc4969bb-ac64-4361-8666-99de6de39271-metrics-certs\") pod \"router-default-5444994796-5ppqg\" (UID: \"cc4969bb-ac64-4361-8666-99de6de39271\") " 
pod="openshift-ingress/router-default-5444994796-5ppqg" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664782 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75a509e8-6d94-4a62-b97d-ea05b87b1b74-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5qflb\" (UID: \"75a509e8-6d94-4a62-b97d-ea05b87b1b74\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5qflb" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664803 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b2a43de-fb0b-4468-af29-c7436e08fb13-config\") pod \"etcd-operator-b45778765-sqrtn\" (UID: \"0b2a43de-fb0b-4468-af29-c7436e08fb13\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664828 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cbc1d8f6-fc70-4349-b70a-44751f67425f-proxy-tls\") pod \"machine-config-controller-84d6567774-4cnjz\" (UID: \"cbc1d8f6-fc70-4349-b70a-44751f67425f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cnjz" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664843 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75a509e8-6d94-4a62-b97d-ea05b87b1b74-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5qflb\" (UID: \"75a509e8-6d94-4a62-b97d-ea05b87b1b74\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5qflb" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664865 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt5vh\" (UniqueName: \"kubernetes.io/projected/cbc1d8f6-fc70-4349-b70a-44751f67425f-kube-api-access-vt5vh\") pod \"machine-config-controller-84d6567774-4cnjz\" (UID: \"cbc1d8f6-fc70-4349-b70a-44751f67425f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cnjz" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664880 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrp29\" (UniqueName: \"kubernetes.io/projected/713c81b7-8d56-4d58-bd4e-f827de0ca17b-kube-api-access-hrp29\") pod \"collect-profiles-29494980-dkmz9\" (UID: \"713c81b7-8d56-4d58-bd4e-f827de0ca17b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664896 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7436f3bf-66e4-4314-aa0b-8af645dd5bee-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zsh8m\" (UID: \"7436f3bf-66e4-4314-aa0b-8af645dd5bee\") " pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664912 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/73c298d0-47e0-4026-bb37-871f2357393d-srv-cert\") pod \"catalog-operator-68c6474976-5h6r9\" (UID: \"73c298d0-47e0-4026-bb37-871f2357393d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5h6r9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664940 4620 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txnh5\" (UniqueName: \"kubernetes.io/projected/ea9233e9-49ad-4164-a7c5-1d02ca87560a-kube-api-access-txnh5\") pod \"kube-storage-version-migrator-operator-b67b599dd-thq9h\" (UID: \"ea9233e9-49ad-4164-a7c5-1d02ca87560a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thq9h" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664975 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9v4l\" (UniqueName: \"kubernetes.io/projected/71877c8f-d05b-42ca-ad80-7ee3277d9558-kube-api-access-h9v4l\") pod \"packageserver-d55dfcdfc-tzb9r\" (UID: \"71877c8f-d05b-42ca-ad80-7ee3277d9558\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.665000 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b2a43de-fb0b-4468-af29-c7436e08fb13-etcd-service-ca\") pod \"etcd-operator-b45778765-sqrtn\" (UID: \"0b2a43de-fb0b-4468-af29-c7436e08fb13\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.665016 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b2a43de-fb0b-4468-af29-c7436e08fb13-etcd-client\") pod \"etcd-operator-b45778765-sqrtn\" (UID: \"0b2a43de-fb0b-4468-af29-c7436e08fb13\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.665030 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cc4969bb-ac64-4361-8666-99de6de39271-stats-auth\") pod \"router-default-5444994796-5ppqg\" (UID: \"cc4969bb-ac64-4361-8666-99de6de39271\") " pod="openshift-ingress/router-default-5444994796-5ppqg" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.665045 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlqdq\" (UniqueName: \"kubernetes.io/projected/c5b2667b-8dba-45fe-8bc5-ca0289d36d96-kube-api-access-zlqdq\") pod \"machine-config-operator-74547568cd-xbhzj\" (UID: \"c5b2667b-8dba-45fe-8bc5-ca0289d36d96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xbhzj" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.665062 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71877c8f-d05b-42ca-ad80-7ee3277d9558-webhook-cert\") pod \"packageserver-d55dfcdfc-tzb9r\" (UID: \"71877c8f-d05b-42ca-ad80-7ee3277d9558\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.665076 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea9233e9-49ad-4164-a7c5-1d02ca87560a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-thq9h\" (UID: \"ea9233e9-49ad-4164-a7c5-1d02ca87560a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thq9h" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.665093 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxlfr\" 
(UniqueName: \"kubernetes.io/projected/d44c8c1f-8400-4291-b4c0-fa7d8a8d584a-kube-api-access-hxlfr\") pod \"package-server-manager-789f6589d5-cl5pv\" (UID: \"d44c8c1f-8400-4291-b4c0-fa7d8a8d584a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cl5pv" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.665116 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cc4969bb-ac64-4361-8666-99de6de39271-default-certificate\") pod \"router-default-5444994796-5ppqg\" (UID: \"cc4969bb-ac64-4361-8666-99de6de39271\") " pod="openshift-ingress/router-default-5444994796-5ppqg" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.665129 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cb416312-125b-4c11-9ecd-07a06d5e6c02-metrics-tls\") pod \"dns-default-bgtck\" (UID: \"cb416312-125b-4c11-9ecd-07a06d5e6c02\") " pod="openshift-dns/dns-default-bgtck" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.665144 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cbc1d8f6-fc70-4349-b70a-44751f67425f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4cnjz\" (UID: \"cbc1d8f6-fc70-4349-b70a-44751f67425f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cnjz" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.665159 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv8zz\" (UniqueName: \"kubernetes.io/projected/82284043-1bad-4d00-ab41-f15f62b48fdb-kube-api-access-lv8zz\") pod \"service-ca-operator-777779d784-jfjwn\" (UID: \"82284043-1bad-4d00-ab41-f15f62b48fdb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jfjwn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.665181 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7436f3bf-66e4-4314-aa0b-8af645dd5bee-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zsh8m\" (UID: \"7436f3bf-66e4-4314-aa0b-8af645dd5bee\") " pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.665217 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/73c298d0-47e0-4026-bb37-871f2357393d-profile-collector-cert\") pod \"catalog-operator-68c6474976-5h6r9\" (UID: \"73c298d0-47e0-4026-bb37-871f2357393d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5h6r9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.665243 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcxk7\" (UniqueName: \"kubernetes.io/projected/e1ad943a-d557-4727-b4e9-a863aae1a47d-kube-api-access-pcxk7\") pod \"control-plane-machine-set-operator-78cbb6b69f-27rrp\" (UID: \"e1ad943a-d557-4727-b4e9-a863aae1a47d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-27rrp" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.665258 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dz9m\" (UniqueName: \"kubernetes.io/projected/7436f3bf-66e4-4314-aa0b-8af645dd5bee-kube-api-access-5dz9m\") 
pod \"marketplace-operator-79b997595-zsh8m\" (UID: \"7436f3bf-66e4-4314-aa0b-8af645dd5bee\") " pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.665273 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/713c81b7-8d56-4d58-bd4e-f827de0ca17b-config-volume\") pod \"collect-profiles-29494980-dkmz9\" (UID: \"713c81b7-8d56-4d58-bd4e-f827de0ca17b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.665293 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82284043-1bad-4d00-ab41-f15f62b48fdb-config\") pod \"service-ca-operator-777779d784-jfjwn\" (UID: \"82284043-1bad-4d00-ab41-f15f62b48fdb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jfjwn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.665313 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b2a43de-fb0b-4468-af29-c7436e08fb13-etcd-ca\") pod \"etcd-operator-b45778765-sqrtn\" (UID: \"0b2a43de-fb0b-4468-af29-c7436e08fb13\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.665704 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc4969bb-ac64-4361-8666-99de6de39271-service-ca-bundle\") pod \"router-default-5444994796-5ppqg\" (UID: \"cc4969bb-ac64-4361-8666-99de6de39271\") " pod="openshift-ingress/router-default-5444994796-5ppqg" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.665908 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0b2a43de-fb0b-4468-af29-c7436e08fb13-etcd-ca\") pod \"etcd-operator-b45778765-sqrtn\" (UID: \"0b2a43de-fb0b-4468-af29-c7436e08fb13\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.666304 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b2a43de-fb0b-4468-af29-c7436e08fb13-config\") pod \"etcd-operator-b45778765-sqrtn\" (UID: \"0b2a43de-fb0b-4468-af29-c7436e08fb13\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.664934 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/71877c8f-d05b-42ca-ad80-7ee3277d9558-tmpfs\") pod \"packageserver-d55dfcdfc-tzb9r\" (UID: \"71877c8f-d05b-42ca-ad80-7ee3277d9558\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.667272 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5b2667b-8dba-45fe-8bc5-ca0289d36d96-auth-proxy-config\") pod \"machine-config-operator-74547568cd-xbhzj\" (UID: \"c5b2667b-8dba-45fe-8bc5-ca0289d36d96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xbhzj" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.667272 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b2a43de-fb0b-4468-af29-c7436e08fb13-serving-cert\") pod \"etcd-operator-b45778765-sqrtn\" (UID: \"0b2a43de-fb0b-4468-af29-c7436e08fb13\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.667377 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b2a43de-fb0b-4468-af29-c7436e08fb13-etcd-service-ca\") pod \"etcd-operator-b45778765-sqrtn\" (UID: \"0b2a43de-fb0b-4468-af29-c7436e08fb13\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.668333 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cbc1d8f6-fc70-4349-b70a-44751f67425f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4cnjz\" (UID: \"cbc1d8f6-fc70-4349-b70a-44751f67425f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cnjz" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.668786 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc4969bb-ac64-4361-8666-99de6de39271-metrics-certs\") pod \"router-default-5444994796-5ppqg\" (UID: \"cc4969bb-ac64-4361-8666-99de6de39271\") " pod="openshift-ingress/router-default-5444994796-5ppqg" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.669557 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cc4969bb-ac64-4361-8666-99de6de39271-default-certificate\") pod \"router-default-5444994796-5ppqg\" (UID: \"cc4969bb-ac64-4361-8666-99de6de39271\") " pod="openshift-ingress/router-default-5444994796-5ppqg" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.669874 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b2a43de-fb0b-4468-af29-c7436e08fb13-etcd-client\") pod \"etcd-operator-b45778765-sqrtn\" (UID: \"0b2a43de-fb0b-4468-af29-c7436e08fb13\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.671804 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cc4969bb-ac64-4361-8666-99de6de39271-stats-auth\") pod \"router-default-5444994796-5ppqg\" (UID: \"cc4969bb-ac64-4361-8666-99de6de39271\") " pod="openshift-ingress/router-default-5444994796-5ppqg" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.676491 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.697098 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.700351 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.702503 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.703163 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.703251 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.716537 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.737090 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.757072 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.768822 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75a509e8-6d94-4a62-b97d-ea05b87b1b74-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5qflb\" (UID: \"75a509e8-6d94-4a62-b97d-ea05b87b1b74\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5qflb" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.776565 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.785446 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75a509e8-6d94-4a62-b97d-ea05b87b1b74-config\") pod \"kube-apiserver-operator-766d6c64bb-5qflb\" (UID: \"75a509e8-6d94-4a62-b97d-ea05b87b1b74\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5qflb" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.796855 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.837742 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.837934 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwlxb\" (UniqueName: \"kubernetes.io/projected/58f39c40-ed87-43dd-90d0-d892d4a56375-kube-api-access-pwlxb\") pod \"route-controller-manager-6576b87f9c-phzrb\" (UID: \"58f39c40-ed87-43dd-90d0-d892d4a56375\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.848439 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/55fa1078-ff94-4efb-80ea-3bcf65192874-metrics-tls\") pod \"ingress-operator-5b745b69d9-c8czx\" (UID: \"55fa1078-ff94-4efb-80ea-3bcf65192874\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8czx" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.861118 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.877773 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.878914 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea9233e9-49ad-4164-a7c5-1d02ca87560a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-thq9h\" (UID: \"ea9233e9-49ad-4164-a7c5-1d02ca87560a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thq9h" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.896913 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.917253 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.938720 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.947685 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea9233e9-49ad-4164-a7c5-1d02ca87560a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-thq9h\" (UID: \"ea9233e9-49ad-4164-a7c5-1d02ca87560a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thq9h" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.957856 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.976595 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 15:03:48 crc kubenswrapper[4620]: I0129 15:03:48.997330 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.016638 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.036853 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.056443 4620 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.083562 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.087344 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55fa1078-ff94-4efb-80ea-3bcf65192874-trusted-ca\") pod \"ingress-operator-5b745b69d9-c8czx\" (UID: \"55fa1078-ff94-4efb-80ea-3bcf65192874\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8czx" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.116611 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.116952 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.136670 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.157047 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.168793 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737e8cb6-704f-4255-8985-ee18874f127d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w8bnd\" (UID: \"737e8cb6-704f-4255-8985-ee18874f127d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w8bnd" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.177587 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.186963 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737e8cb6-704f-4255-8985-ee18874f127d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w8bnd\" (UID: \"737e8cb6-704f-4255-8985-ee18874f127d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w8bnd" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.197653 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.216730 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.238113 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cbc1d8f6-fc70-4349-b70a-44751f67425f-proxy-tls\") pod \"machine-config-controller-84d6567774-4cnjz\" (UID: \"cbc1d8f6-fc70-4349-b70a-44751f67425f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cnjz" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.238264 4620 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.258861 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.270344 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9295f40-e604-4f7f-9f79-f44f1a5f9bc3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z8v2d\" (UID: \"e9295f40-e604-4f7f-9f79-f44f1a5f9bc3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8v2d" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.277031 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.285455 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9295f40-e604-4f7f-9f79-f44f1a5f9bc3-config\") pod \"kube-controller-manager-operator-78b949d7b-z8v2d\" (UID: \"e9295f40-e604-4f7f-9f79-f44f1a5f9bc3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8v2d" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.296981 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.316468 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.328558 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb"] Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.336494 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 15:03:49 crc kubenswrapper[4620]: W0129 15:03:49.336982 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58f39c40_ed87_43dd_90d0_d892d4a56375.slice/crio-7d19f101c4f72540f3a8a430136a2ef23f3f8484262961f8c3c3886a9e57ac57 WatchSource:0}: Error finding container 7d19f101c4f72540f3a8a430136a2ef23f3f8484262961f8c3c3886a9e57ac57: Status 404 returned error can't find the container with id 7d19f101c4f72540f3a8a430136a2ef23f3f8484262961f8c3c3886a9e57ac57 Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.346990 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c5b2667b-8dba-45fe-8bc5-ca0289d36d96-images\") pod \"machine-config-operator-74547568cd-xbhzj\" (UID: \"c5b2667b-8dba-45fe-8bc5-ca0289d36d96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xbhzj" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.356586 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.377415 4620 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.390272 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c5b2667b-8dba-45fe-8bc5-ca0289d36d96-proxy-tls\") pod \"machine-config-operator-74547568cd-xbhzj\" (UID: \"c5b2667b-8dba-45fe-8bc5-ca0289d36d96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xbhzj" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.396602 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.401419 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71877c8f-d05b-42ca-ad80-7ee3277d9558-webhook-cert\") pod \"packageserver-d55dfcdfc-tzb9r\" (UID: \"71877c8f-d05b-42ca-ad80-7ee3277d9558\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.408910 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71877c8f-d05b-42ca-ad80-7ee3277d9558-apiservice-cert\") pod \"packageserver-d55dfcdfc-tzb9r\" (UID: \"71877c8f-d05b-42ca-ad80-7ee3277d9558\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.419914 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.434960 4620 request.go:700] Waited for 1.012060125s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.435988 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.456141 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.468887 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d44c8c1f-8400-4291-b4c0-fa7d8a8d584a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cl5pv\" (UID: \"d44c8c1f-8400-4291-b4c0-fa7d8a8d584a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cl5pv" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.477044 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs\") pod \"network-metrics-daemon-twqvf\" (UID: \"82634d3f-d985-4384-bd37-426d509d4e57\") " pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.483031 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82634d3f-d985-4384-bd37-426d509d4e57-metrics-certs\") pod 
\"network-metrics-daemon-twqvf\" (UID: \"82634d3f-d985-4384-bd37-426d509d4e57\") " pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.489591 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhz5z\" (UniqueName: \"kubernetes.io/projected/9b6421d4-a97f-4867-a78c-50ba4d6486ea-kube-api-access-rhz5z\") pod \"cluster-samples-operator-665b6dd947-qpx6x\" (UID: \"9b6421d4-a97f-4867-a78c-50ba4d6486ea\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qpx6x" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.496575 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.499714 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/73c298d0-47e0-4026-bb37-871f2357393d-profile-collector-cert\") pod \"catalog-operator-68c6474976-5h6r9\" (UID: \"73c298d0-47e0-4026-bb37-871f2357393d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5h6r9" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.509533 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f35cc7a6-91ec-4a66-8bb1-6418c351eaf3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lftvg\" (UID: \"f35cc7a6-91ec-4a66-8bb1-6418c351eaf3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lftvg" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.510024 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/713c81b7-8d56-4d58-bd4e-f827de0ca17b-secret-volume\") pod \"collect-profiles-29494980-dkmz9\" (UID: \"713c81b7-8d56-4d58-bd4e-f827de0ca17b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.517455 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.528606 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-twqvf" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.529508 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/73c298d0-47e0-4026-bb37-871f2357393d-srv-cert\") pod \"catalog-operator-68c6474976-5h6r9\" (UID: \"73c298d0-47e0-4026-bb37-871f2357393d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5h6r9" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.540840 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.556387 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.574345 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qpx6x" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.576047 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.597065 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.617191 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.637341 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.656201 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 15:03:49 crc kubenswrapper[4620]: E0129 15:03:49.664900 4620 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 15:03:49 crc kubenswrapper[4620]: E0129 15:03:49.664988 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82284043-1bad-4d00-ab41-f15f62b48fdb-serving-cert podName:82284043-1bad-4d00-ab41-f15f62b48fdb nodeName:}" failed. No retries permitted until 2026-01-29 15:03:50.164966825 +0000 UTC m=+170.777794470 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/82284043-1bad-4d00-ab41-f15f62b48fdb-serving-cert") pod "service-ca-operator-777779d784-jfjwn" (UID: "82284043-1bad-4d00-ab41-f15f62b48fdb") : failed to sync secret cache: timed out waiting for the condition Jan 29 15:03:49 crc kubenswrapper[4620]: E0129 15:03:49.666040 4620 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 15:03:49 crc kubenswrapper[4620]: E0129 15:03:49.666089 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f35cc7a6-91ec-4a66-8bb1-6418c351eaf3-srv-cert podName:f35cc7a6-91ec-4a66-8bb1-6418c351eaf3 nodeName:}" failed. No retries permitted until 2026-01-29 15:03:50.166076051 +0000 UTC m=+170.778903696 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/f35cc7a6-91ec-4a66-8bb1-6418c351eaf3-srv-cert") pod "olm-operator-6b444d44fb-lftvg" (UID: "f35cc7a6-91ec-4a66-8bb1-6418c351eaf3") : failed to sync secret cache: timed out waiting for the condition Jan 29 15:03:49 crc kubenswrapper[4620]: E0129 15:03:49.666090 4620 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Jan 29 15:03:49 crc kubenswrapper[4620]: E0129 15:03:49.666138 4620 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 29 15:03:49 crc kubenswrapper[4620]: E0129 15:03:49.666169 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cb416312-125b-4c11-9ecd-07a06d5e6c02-config-volume podName:cb416312-125b-4c11-9ecd-07a06d5e6c02 nodeName:}" failed. 
No retries permitted until 2026-01-29 15:03:50.166142843 +0000 UTC m=+170.778970478 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/cb416312-125b-4c11-9ecd-07a06d5e6c02-config-volume") pod "dns-default-bgtck" (UID: "cb416312-125b-4c11-9ecd-07a06d5e6c02") : failed to sync configmap cache: timed out waiting for the condition Jan 29 15:03:49 crc kubenswrapper[4620]: E0129 15:03:49.666042 4620 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 15:03:49 crc kubenswrapper[4620]: E0129 15:03:49.666188 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1ad943a-d557-4727-b4e9-a863aae1a47d-control-plane-machine-set-operator-tls podName:e1ad943a-d557-4727-b4e9-a863aae1a47d nodeName:}" failed. No retries permitted until 2026-01-29 15:03:50.166176454 +0000 UTC m=+170.779004189 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/e1ad943a-d557-4727-b4e9-a863aae1a47d-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-27rrp" (UID: "e1ad943a-d557-4727-b4e9-a863aae1a47d") : failed to sync secret cache: timed out waiting for the condition Jan 29 15:03:49 crc kubenswrapper[4620]: E0129 15:03:49.666215 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b19c06b-4261-4573-a304-a28976a1c610-cert podName:9b19c06b-4261-4573-a304-a28976a1c610 nodeName:}" failed. No retries permitted until 2026-01-29 15:03:50.166203535 +0000 UTC m=+170.779031270 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9b19c06b-4261-4573-a304-a28976a1c610-cert") pod "ingress-canary-8srf4" (UID: "9b19c06b-4261-4573-a304-a28976a1c610") : failed to sync secret cache: timed out waiting for the condition Jan 29 15:03:49 crc kubenswrapper[4620]: E0129 15:03:49.667486 4620 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 29 15:03:49 crc kubenswrapper[4620]: E0129 15:03:49.667504 4620 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Jan 29 15:03:49 crc kubenswrapper[4620]: E0129 15:03:49.667537 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb416312-125b-4c11-9ecd-07a06d5e6c02-metrics-tls podName:cb416312-125b-4c11-9ecd-07a06d5e6c02 nodeName:}" failed. No retries permitted until 2026-01-29 15:03:50.167522146 +0000 UTC m=+170.780349861 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cb416312-125b-4c11-9ecd-07a06d5e6c02-metrics-tls") pod "dns-default-bgtck" (UID: "cb416312-125b-4c11-9ecd-07a06d5e6c02") : failed to sync secret cache: timed out waiting for the condition Jan 29 15:03:49 crc kubenswrapper[4620]: E0129 15:03:49.667559 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7436f3bf-66e4-4314-aa0b-8af645dd5bee-marketplace-operator-metrics podName:7436f3bf-66e4-4314-aa0b-8af645dd5bee nodeName:}" failed. No retries permitted until 2026-01-29 15:03:50.167549897 +0000 UTC m=+170.780377662 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7436f3bf-66e4-4314-aa0b-8af645dd5bee-marketplace-operator-metrics") pod "marketplace-operator-79b997595-zsh8m" (UID: "7436f3bf-66e4-4314-aa0b-8af645dd5bee") : failed to sync secret cache: timed out waiting for the condition Jan 29 15:03:49 crc kubenswrapper[4620]: E0129 15:03:49.667600 4620 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Jan 29 15:03:49 crc kubenswrapper[4620]: E0129 15:03:49.667605 4620 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 29 15:03:49 crc kubenswrapper[4620]: E0129 15:03:49.667631 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/713c81b7-8d56-4d58-bd4e-f827de0ca17b-config-volume podName:713c81b7-8d56-4d58-bd4e-f827de0ca17b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:50.167623259 +0000 UTC m=+170.780450994 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/713c81b7-8d56-4d58-bd4e-f827de0ca17b-config-volume") pod "collect-profiles-29494980-dkmz9" (UID: "713c81b7-8d56-4d58-bd4e-f827de0ca17b") : failed to sync configmap cache: timed out waiting for the condition Jan 29 15:03:49 crc kubenswrapper[4620]: E0129 15:03:49.667646 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7436f3bf-66e4-4314-aa0b-8af645dd5bee-marketplace-trusted-ca podName:7436f3bf-66e4-4314-aa0b-8af645dd5bee nodeName:}" failed. No retries permitted until 2026-01-29 15:03:50.16763915 +0000 UTC m=+170.780466895 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/7436f3bf-66e4-4314-aa0b-8af645dd5bee-marketplace-trusted-ca") pod "marketplace-operator-79b997595-zsh8m" (UID: "7436f3bf-66e4-4314-aa0b-8af645dd5bee") : failed to sync configmap cache: timed out waiting for the condition Jan 29 15:03:49 crc kubenswrapper[4620]: E0129 15:03:49.667657 4620 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 29 15:03:49 crc kubenswrapper[4620]: E0129 15:03:49.667687 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/82284043-1bad-4d00-ab41-f15f62b48fdb-config podName:82284043-1bad-4d00-ab41-f15f62b48fdb nodeName:}" failed. No retries permitted until 2026-01-29 15:03:50.167677081 +0000 UTC m=+170.780504806 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/82284043-1bad-4d00-ab41-f15f62b48fdb-config") pod "service-ca-operator-777779d784-jfjwn" (UID: "82284043-1bad-4d00-ab41-f15f62b48fdb") : failed to sync configmap cache: timed out waiting for the condition Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.677865 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.694539 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-twqvf"] Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.698466 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 15:03:49 crc kubenswrapper[4620]: W0129 15:03:49.706902 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82634d3f_d985_4384_bd37_426d509d4e57.slice/crio-2fb5f221979c27ede83bd0713e26fd60caf12a0b982349e3a2032add4818edfb WatchSource:0}: Error finding container 2fb5f221979c27ede83bd0713e26fd60caf12a0b982349e3a2032add4818edfb: Status 404 returned error can't find the container with id 2fb5f221979c27ede83bd0713e26fd60caf12a0b982349e3a2032add4818edfb Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.717769 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.736607 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.753473 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qpx6x"] Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.756850 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.777207 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.782070 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-twqvf" event={"ID":"82634d3f-d985-4384-bd37-426d509d4e57","Type":"ContainerStarted","Data":"2fb5f221979c27ede83bd0713e26fd60caf12a0b982349e3a2032add4818edfb"} Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.784312 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb" event={"ID":"58f39c40-ed87-43dd-90d0-d892d4a56375","Type":"ContainerStarted","Data":"0c3ba8b41a42de2e08920d865a9072841f7a9e29e605ad2e035f4d7f8bc432ca"} Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.784383 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb" event={"ID":"58f39c40-ed87-43dd-90d0-d892d4a56375","Type":"ContainerStarted","Data":"7d19f101c4f72540f3a8a430136a2ef23f3f8484262961f8c3c3886a9e57ac57"} Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.784551 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.789599 4620 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-phzrb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.789653 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb" podUID="58f39c40-ed87-43dd-90d0-d892d4a56375" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.797964 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.816646 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.837140 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.857217 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.876406 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.918239 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.918312 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.950714 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.955707 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.976151 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 15:03:49 crc kubenswrapper[4620]: I0129 15:03:49.996088 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.016349 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.037400 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.056567 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.076821 4620 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.096422 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.118212 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.137420 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.157903 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.176941 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.205095 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.216997 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.221013 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/713c81b7-8d56-4d58-bd4e-f827de0ca17b-config-volume\") pod \"collect-profiles-29494980-dkmz9\" (UID: \"713c81b7-8d56-4d58-bd4e-f827de0ca17b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.221970 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/713c81b7-8d56-4d58-bd4e-f827de0ca17b-config-volume\") pod \"collect-profiles-29494980-dkmz9\" (UID: \"713c81b7-8d56-4d58-bd4e-f827de0ca17b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.222082 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82284043-1bad-4d00-ab41-f15f62b48fdb-config\") pod \"service-ca-operator-777779d784-jfjwn\" (UID: \"82284043-1bad-4d00-ab41-f15f62b48fdb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jfjwn" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.222589 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82284043-1bad-4d00-ab41-f15f62b48fdb-serving-cert\") pod \"service-ca-operator-777779d784-jfjwn\" (UID: \"82284043-1bad-4d00-ab41-f15f62b48fdb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jfjwn" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.222708 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f35cc7a6-91ec-4a66-8bb1-6418c351eaf3-srv-cert\") pod \"olm-operator-6b444d44fb-lftvg\" (UID: \"f35cc7a6-91ec-4a66-8bb1-6418c351eaf3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lftvg" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.222751 4620 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb416312-125b-4c11-9ecd-07a06d5e6c02-config-volume\") pod \"dns-default-bgtck\" (UID: \"cb416312-125b-4c11-9ecd-07a06d5e6c02\") " pod="openshift-dns/dns-default-bgtck" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.222808 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b19c06b-4261-4573-a304-a28976a1c610-cert\") pod \"ingress-canary-8srf4\" (UID: \"9b19c06b-4261-4573-a304-a28976a1c610\") " pod="openshift-ingress-canary/ingress-canary-8srf4" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.222897 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e1ad943a-d557-4727-b4e9-a863aae1a47d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-27rrp\" (UID: \"e1ad943a-d557-4727-b4e9-a863aae1a47d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-27rrp" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.223001 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7436f3bf-66e4-4314-aa0b-8af645dd5bee-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zsh8m\" (UID: \"7436f3bf-66e4-4314-aa0b-8af645dd5bee\") " pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.223124 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cb416312-125b-4c11-9ecd-07a06d5e6c02-metrics-tls\") pod \"dns-default-bgtck\" (UID: \"cb416312-125b-4c11-9ecd-07a06d5e6c02\") " pod="openshift-dns/dns-default-bgtck" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.223190 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7436f3bf-66e4-4314-aa0b-8af645dd5bee-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zsh8m\" (UID: \"7436f3bf-66e4-4314-aa0b-8af645dd5bee\") " pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.224332 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb416312-125b-4c11-9ecd-07a06d5e6c02-config-volume\") pod \"dns-default-bgtck\" (UID: \"cb416312-125b-4c11-9ecd-07a06d5e6c02\") " pod="openshift-dns/dns-default-bgtck" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.225185 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7436f3bf-66e4-4314-aa0b-8af645dd5bee-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zsh8m\" (UID: \"7436f3bf-66e4-4314-aa0b-8af645dd5bee\") " pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.226538 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82284043-1bad-4d00-ab41-f15f62b48fdb-config\") pod \"service-ca-operator-777779d784-jfjwn\" (UID: \"82284043-1bad-4d00-ab41-f15f62b48fdb\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-jfjwn" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.227020 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e1ad943a-d557-4727-b4e9-a863aae1a47d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-27rrp\" (UID: \"e1ad943a-d557-4727-b4e9-a863aae1a47d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-27rrp" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.227742 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f35cc7a6-91ec-4a66-8bb1-6418c351eaf3-srv-cert\") pod \"olm-operator-6b444d44fb-lftvg\" (UID: \"f35cc7a6-91ec-4a66-8bb1-6418c351eaf3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lftvg" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.229978 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b19c06b-4261-4573-a304-a28976a1c610-cert\") pod \"ingress-canary-8srf4\" (UID: \"9b19c06b-4261-4573-a304-a28976a1c610\") " pod="openshift-ingress-canary/ingress-canary-8srf4" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.230058 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cb416312-125b-4c11-9ecd-07a06d5e6c02-metrics-tls\") pod \"dns-default-bgtck\" (UID: \"cb416312-125b-4c11-9ecd-07a06d5e6c02\") " pod="openshift-dns/dns-default-bgtck" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.230114 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82284043-1bad-4d00-ab41-f15f62b48fdb-serving-cert\") pod \"service-ca-operator-777779d784-jfjwn\" (UID: \"82284043-1bad-4d00-ab41-f15f62b48fdb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jfjwn" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.230201 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7436f3bf-66e4-4314-aa0b-8af645dd5bee-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zsh8m\" (UID: \"7436f3bf-66e4-4314-aa0b-8af645dd5bee\") " pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.271543 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v569s\" (UniqueName: \"kubernetes.io/projected/f9d96dcf-b094-485c-8636-401ccc71e918-kube-api-access-v569s\") pod \"controller-manager-879f6c89f-2j567\" (UID: \"f9d96dcf-b094-485c-8636-401ccc71e918\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.291858 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxznq\" (UniqueName: \"kubernetes.io/projected/f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b-kube-api-access-kxznq\") pod \"apiserver-7bbb656c7d-stj67\" (UID: \"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.310954 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgtd6\" (UniqueName: 
\"kubernetes.io/projected/bc6f4802-99e9-4a10-bdfb-f132a81023eb-kube-api-access-pgtd6\") pod \"console-operator-58897d9998-8h784\" (UID: \"bc6f4802-99e9-4a10-bdfb-f132a81023eb\") " pod="openshift-console-operator/console-operator-58897d9998-8h784" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.332553 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b741ee63-8420-4991-9682-69a55770d9c6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-sjk2k\" (UID: \"b741ee63-8420-4991-9682-69a55770d9c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjk2k" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.345066 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.357172 4620 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.361955 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8h784" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.377740 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.422518 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g92lg\" (UniqueName: \"kubernetes.io/projected/4e24af32-2b91-4772-b26e-a683ba5c3d16-kube-api-access-g92lg\") pod \"openshift-config-operator-7777fb866f-qlbvc\" (UID: \"4e24af32-2b91-4772-b26e-a683ba5c3d16\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.431626 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4b55\" (UniqueName: \"kubernetes.io/projected/e10d1772-1876-4d4f-a2f8-9cbd70d5e7c9-kube-api-access-t4b55\") pod \"openshift-controller-manager-operator-756b6f6bc6-rmpbk\" (UID: \"e10d1772-1876-4d4f-a2f8-9cbd70d5e7c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmpbk" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.435526 4620 request.go:700] Waited for 1.871897814s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.455590 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfwvq\" (UniqueName: \"kubernetes.io/projected/37d0c18f-2c13-4cb8-8523-853e305dfa47-kube-api-access-qfwvq\") pod \"authentication-operator-69f744f599-gt9s9\" (UID: \"37d0c18f-2c13-4cb8-8523-853e305dfa47\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gt9s9" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.471183 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4jww\" (UniqueName: \"kubernetes.io/projected/e787f3bd-fa49-4961-9556-f5f0f25fca06-kube-api-access-d4jww\") pod \"dns-operator-744455d44c-8grgt\" (UID: \"e787f3bd-fa49-4961-9556-f5f0f25fca06\") " pod="openshift-dns-operator/dns-operator-744455d44c-8grgt" Jan 29 15:03:50 crc 
kubenswrapper[4620]: I0129 15:03:50.496112 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lgkk\" (UniqueName: \"kubernetes.io/projected/aa662f18-6ab4-43b8-8e65-8de41043b74d-kube-api-access-2lgkk\") pod \"console-f9d7485db-z57gf\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.511350 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rghwj\" (UniqueName: \"kubernetes.io/projected/8a1b8f5b-4123-4cc0-9d2c-dca6ae8453d5-kube-api-access-rghwj\") pod \"downloads-7954f5f757-gwr5k\" (UID: \"8a1b8f5b-4123-4cc0-9d2c-dca6ae8453d5\") " pod="openshift-console/downloads-7954f5f757-gwr5k" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.522316 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.530256 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85gmx\" (UniqueName: \"kubernetes.io/projected/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-kube-api-access-85gmx\") pod \"oauth-openshift-558db77b4-8q7cm\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.544287 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.550446 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-gwr5k" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.551224 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwn95\" (UniqueName: \"kubernetes.io/projected/b741ee63-8420-4991-9682-69a55770d9c6-kube-api-access-kwn95\") pod \"cluster-image-registry-operator-dc59b4c8b-sjk2k\" (UID: \"b741ee63-8420-4991-9682-69a55770d9c6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjk2k" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.569907 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npghp\" (UniqueName: \"kubernetes.io/projected/8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd-kube-api-access-npghp\") pod \"apiserver-76f77b778f-x75k8\" (UID: \"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd\") " pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.571650 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.591967 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmpbk" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.599831 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-976x2\" (UniqueName: \"kubernetes.io/projected/02e136c3-8346-4d92-bc4f-fbee60798447-kube-api-access-976x2\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwkt\" (UID: \"02e136c3-8346-4d92-bc4f-fbee60798447\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwkt" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.610383 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8grgt" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.623489 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdgff\" (UniqueName: \"kubernetes.io/projected/02381b1b-463c-4532-a690-deee86ffc674-kube-api-access-wdgff\") pod \"machine-api-operator-5694c8668f-swx9b\" (UID: \"02381b1b-463c-4532-a690-deee86ffc674\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-swx9b" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.623719 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.645326 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wldms\" (UniqueName: \"kubernetes.io/projected/bfd69ce1-51c3-44b0-81e5-576a633bff91-kube-api-access-wldms\") pod \"machine-approver-56656f9798-tgs4s\" (UID: \"bfd69ce1-51c3-44b0-81e5-576a633bff91\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgs4s" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.647889 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.657855 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hncd6\" (UniqueName: \"kubernetes.io/projected/9b19c06b-4261-4573-a304-a28976a1c610-kube-api-access-hncd6\") pod \"ingress-canary-8srf4\" (UID: \"9b19c06b-4261-4573-a304-a28976a1c610\") " pod="openshift-ingress-canary/ingress-canary-8srf4" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.672645 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.673766 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/737e8cb6-704f-4255-8985-ee18874f127d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w8bnd\" (UID: \"737e8cb6-704f-4255-8985-ee18874f127d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w8bnd" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.699178 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9295f40-e604-4f7f-9f79-f44f1a5f9bc3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-z8v2d\" (UID: \"e9295f40-e604-4f7f-9f79-f44f1a5f9bc3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8v2d" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.712177 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84xcd\" (UniqueName: \"kubernetes.io/projected/55fa1078-ff94-4efb-80ea-3bcf65192874-kube-api-access-84xcd\") pod \"ingress-operator-5b745b69d9-c8czx\" (UID: \"55fa1078-ff94-4efb-80ea-3bcf65192874\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8czx" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.715960 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gt9s9" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.735848 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w8bnd" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.740094 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l2nj\" (UniqueName: \"kubernetes.io/projected/0b2a43de-fb0b-4468-af29-c7436e08fb13-kube-api-access-2l2nj\") pod \"etcd-operator-b45778765-sqrtn\" (UID: \"0b2a43de-fb0b-4468-af29-c7436e08fb13\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.743881 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8v2d" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.753543 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8h784"] Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.760594 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjk2k" Jan 29 15:03:50 crc kubenswrapper[4620]: W0129 15:03:50.774460 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc6f4802_99e9_4a10_bdfb_f132a81023eb.slice/crio-6e6f191ecf253a0537c3d1ec206000394e9b93be94864318ee0ed3c8463196f7 WatchSource:0}: Error finding container 6e6f191ecf253a0537c3d1ec206000394e9b93be94864318ee0ed3c8463196f7: Status 404 returned error can't find the container with id 6e6f191ecf253a0537c3d1ec206000394e9b93be94864318ee0ed3c8463196f7 Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.778260 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7vv7\" (UniqueName: \"kubernetes.io/projected/cc4969bb-ac64-4361-8666-99de6de39271-kube-api-access-x7vv7\") pod \"router-default-5444994796-5ppqg\" (UID: \"cc4969bb-ac64-4361-8666-99de6de39271\") " pod="openshift-ingress/router-default-5444994796-5ppqg" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.781811 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krnkn\" (UniqueName: \"kubernetes.io/projected/0df986bd-6456-42fd-b063-301263d6e7ce-kube-api-access-krnkn\") pod \"migrator-59844c95c7-5lxk8\" (UID: \"0df986bd-6456-42fd-b063-301263d6e7ce\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5lxk8" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.797528 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-z57gf"] Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.812960 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qq2j\" (UniqueName: \"kubernetes.io/projected/73c298d0-47e0-4026-bb37-871f2357393d-kube-api-access-6qq2j\") pod \"catalog-operator-68c6474976-5h6r9\" (UID: \"73c298d0-47e0-4026-bb37-871f2357393d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5h6r9" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.814265 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-twqvf" event={"ID":"82634d3f-d985-4384-bd37-426d509d4e57","Type":"ContainerStarted","Data":"37b28806b19dff8848440addfd8ffa871323e7993a204d8c53416ba206ac8124"} Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.814296 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-twqvf" event={"ID":"82634d3f-d985-4384-bd37-426d509d4e57","Type":"ContainerStarted","Data":"2750911920f02f9f80b80eb38bc38f7a7d3a118feadeb404ed7edf6d3c62b121"} Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.818349 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h6vq\" (UniqueName: \"kubernetes.io/projected/f35cc7a6-91ec-4a66-8bb1-6418c351eaf3-kube-api-access-5h6vq\") pod \"olm-operator-6b444d44fb-lftvg\" (UID: \"f35cc7a6-91ec-4a66-8bb1-6418c351eaf3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lftvg" Jan 29 15:03:50 crc kubenswrapper[4620]: W0129 15:03:50.822039 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa662f18_6ab4_43b8_8e65_8de41043b74d.slice/crio-dc4b855e65c23c1e3f912506129d8790e79d40b449d5ef637ba5e7458b64e57d WatchSource:0}: Error finding container dc4b855e65c23c1e3f912506129d8790e79d40b449d5ef637ba5e7458b64e57d: 
Status 404 returned error can't find the container with id dc4b855e65c23c1e3f912506129d8790e79d40b449d5ef637ba5e7458b64e57d Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.840215 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55fa1078-ff94-4efb-80ea-3bcf65192874-bound-sa-token\") pod \"ingress-operator-5b745b69d9-c8czx\" (UID: \"55fa1078-ff94-4efb-80ea-3bcf65192874\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8czx" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.857745 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75a509e8-6d94-4a62-b97d-ea05b87b1b74-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5qflb\" (UID: \"75a509e8-6d94-4a62-b97d-ea05b87b1b74\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5qflb" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.863050 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2j567"] Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.874374 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgs4s" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.878892 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt5vh\" (UniqueName: \"kubernetes.io/projected/cbc1d8f6-fc70-4349-b70a-44751f67425f-kube-api-access-vt5vh\") pod \"machine-config-controller-84d6567774-4cnjz\" (UID: \"cbc1d8f6-fc70-4349-b70a-44751f67425f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cnjz" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.879829 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-swx9b" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.881912 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8srf4" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.895859 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxlfr\" (UniqueName: \"kubernetes.io/projected/d44c8c1f-8400-4291-b4c0-fa7d8a8d584a-kube-api-access-hxlfr\") pod \"package-server-manager-789f6589d5-cl5pv\" (UID: \"d44c8c1f-8400-4291-b4c0-fa7d8a8d584a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cl5pv" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.896467 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwkt" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.913794 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrp29\" (UniqueName: \"kubernetes.io/projected/713c81b7-8d56-4d58-bd4e-f827de0ca17b-kube-api-access-hrp29\") pod \"collect-profiles-29494980-dkmz9\" (UID: \"713c81b7-8d56-4d58-bd4e-f827de0ca17b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.916854 4620 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-phzrb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.916885 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb" podUID="58f39c40-ed87-43dd-90d0-d892d4a56375" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.933550 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5ppqg" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.946480 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.949314 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5qflb" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.955344 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qpx6x" event={"ID":"9b6421d4-a97f-4867-a78c-50ba4d6486ea","Type":"ContainerStarted","Data":"12c94c09ceea15d068aa36b93f0755668b82b7ea098fbeb44ce96d3d8405ffac"} Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.955388 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qpx6x" event={"ID":"9b6421d4-a97f-4867-a78c-50ba4d6486ea","Type":"ContainerStarted","Data":"eda31ca4c5ddebe73dc9578f16c696f99f7ddd1e3938c73a61db33fbd14b873a"} Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.955400 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8h784" event={"ID":"bc6f4802-99e9-4a10-bdfb-f132a81023eb","Type":"ContainerStarted","Data":"6e6f191ecf253a0537c3d1ec206000394e9b93be94864318ee0ed3c8463196f7"} Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.962678 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txnh5\" (UniqueName: \"kubernetes.io/projected/ea9233e9-49ad-4164-a7c5-1d02ca87560a-kube-api-access-txnh5\") pod \"kube-storage-version-migrator-operator-b67b599dd-thq9h\" (UID: \"ea9233e9-49ad-4164-a7c5-1d02ca87560a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thq9h" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.978197 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv8zz\" (UniqueName: \"kubernetes.io/projected/82284043-1bad-4d00-ab41-f15f62b48fdb-kube-api-access-lv8zz\") pod \"service-ca-operator-777779d784-jfjwn\" (UID: \"82284043-1bad-4d00-ab41-f15f62b48fdb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jfjwn" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.978337 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9v4l\" (UniqueName: \"kubernetes.io/projected/71877c8f-d05b-42ca-ad80-7ee3277d9558-kube-api-access-h9v4l\") pod \"packageserver-d55dfcdfc-tzb9r\" (UID: \"71877c8f-d05b-42ca-ad80-7ee3277d9558\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r" Jan 29 15:03:50 crc kubenswrapper[4620]: I0129 15:03:50.982999 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-gwr5k"] Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.000739 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dz9m\" (UniqueName: \"kubernetes.io/projected/7436f3bf-66e4-4314-aa0b-8af645dd5bee-kube-api-access-5dz9m\") pod \"marketplace-operator-79b997595-zsh8m\" (UID: \"7436f3bf-66e4-4314-aa0b-8af645dd5bee\") " pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.017037 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thq9h" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.025912 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5lxk8" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.028722 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8czx" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.038719 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlqdq\" (UniqueName: \"kubernetes.io/projected/c5b2667b-8dba-45fe-8bc5-ca0289d36d96-kube-api-access-zlqdq\") pod \"machine-config-operator-74547568cd-xbhzj\" (UID: \"c5b2667b-8dba-45fe-8bc5-ca0289d36d96\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xbhzj" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.057622 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsvpk\" (UniqueName: \"kubernetes.io/projected/cb416312-125b-4c11-9ecd-07a06d5e6c02-kube-api-access-zsvpk\") pod \"dns-default-bgtck\" (UID: \"cb416312-125b-4c11-9ecd-07a06d5e6c02\") " pod="openshift-dns/dns-default-bgtck" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.058319 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xbhzj" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.061013 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcxk7\" (UniqueName: \"kubernetes.io/projected/e1ad943a-d557-4727-b4e9-a863aae1a47d-kube-api-access-pcxk7\") pod \"control-plane-machine-set-operator-78cbb6b69f-27rrp\" (UID: \"e1ad943a-d557-4727-b4e9-a863aae1a47d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-27rrp" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.071358 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.072060 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cnjz" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.079103 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cl5pv" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.098677 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5h6r9" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.115510 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lftvg" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.125280 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-27rrp" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.134534 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jfjwn" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.149213 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c8a22e2b-bb08-4b1f-964b-ff2651c3ea34-certs\") pod \"machine-config-server-jvvlj\" (UID: \"c8a22e2b-bb08-4b1f-964b-ff2651c3ea34\") " pod="openshift-machine-config-operator/machine-config-server-jvvlj" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.149268 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e8c35b0-3703-4eff-8610-6933e4b7b391-trusted-ca\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.149294 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrljf\" (UniqueName: \"kubernetes.io/projected/8e8c35b0-3703-4eff-8610-6933e4b7b391-kube-api-access-nrljf\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.149325 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.149344 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6453eca8-faa2-4fae-821a-246b72626d90-signing-cabundle\") pod \"service-ca-9c57cc56f-gdsbs\" (UID: \"6453eca8-faa2-4fae-821a-246b72626d90\") " pod="openshift-service-ca/service-ca-9c57cc56f-gdsbs" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.149372 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8e8c35b0-3703-4eff-8610-6933e4b7b391-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.149389 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmk87\" (UniqueName: \"kubernetes.io/projected/c8a22e2b-bb08-4b1f-964b-ff2651c3ea34-kube-api-access-jmk87\") pod \"machine-config-server-jvvlj\" (UID: \"c8a22e2b-bb08-4b1f-964b-ff2651c3ea34\") " pod="openshift-machine-config-operator/machine-config-server-jvvlj" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.149405 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8e8c35b0-3703-4eff-8610-6933e4b7b391-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.149431 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5clmz\" (UniqueName: \"kubernetes.io/projected/1bf2bada-fd6c-4ec0-9748-c46e8b1ae2dc-kube-api-access-5clmz\") pod \"multus-admission-controller-857f4d67dd-ncw89\" (UID: \"1bf2bada-fd6c-4ec0-9748-c46e8b1ae2dc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ncw89" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.149446 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6453eca8-faa2-4fae-821a-246b72626d90-signing-key\") pod \"service-ca-9c57cc56f-gdsbs\" (UID: \"6453eca8-faa2-4fae-821a-246b72626d90\") " pod="openshift-service-ca/service-ca-9c57cc56f-gdsbs" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.149462 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8e8c35b0-3703-4eff-8610-6933e4b7b391-registry-tls\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.149476 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8e8c35b0-3703-4eff-8610-6933e4b7b391-bound-sa-token\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.149492 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1bf2bada-fd6c-4ec0-9748-c46e8b1ae2dc-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ncw89\" (UID: \"1bf2bada-fd6c-4ec0-9748-c46e8b1ae2dc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ncw89" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.149515 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp58x\" (UniqueName: \"kubernetes.io/projected/6453eca8-faa2-4fae-821a-246b72626d90-kube-api-access-sp58x\") pod \"service-ca-9c57cc56f-gdsbs\" (UID: \"6453eca8-faa2-4fae-821a-246b72626d90\") " pod="openshift-service-ca/service-ca-9c57cc56f-gdsbs" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.149543 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c8a22e2b-bb08-4b1f-964b-ff2651c3ea34-node-bootstrap-token\") pod \"machine-config-server-jvvlj\" (UID: \"c8a22e2b-bb08-4b1f-964b-ff2651c3ea34\") " pod="openshift-machine-config-operator/machine-config-server-jvvlj" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.149565 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8e8c35b0-3703-4eff-8610-6933e4b7b391-registry-certificates\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 
crc kubenswrapper[4620]: E0129 15:03:51.150112 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:51.65010233 +0000 UTC m=+172.262929975 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.159221 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.159416 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.163043 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-bgtck" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.222521 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8q7cm"] Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.250167 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:51 crc kubenswrapper[4620]: E0129 15:03:51.250331 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:51.750284797 +0000 UTC m=+172.363112442 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.250427 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8e8c35b0-3703-4eff-8610-6933e4b7b391-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.250464 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmk87\" (UniqueName: \"kubernetes.io/projected/c8a22e2b-bb08-4b1f-964b-ff2651c3ea34-kube-api-access-jmk87\") pod \"machine-config-server-jvvlj\" (UID: \"c8a22e2b-bb08-4b1f-964b-ff2651c3ea34\") " pod="openshift-machine-config-operator/machine-config-server-jvvlj" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.250497 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8e8c35b0-3703-4eff-8610-6933e4b7b391-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.250585 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5clmz\" (UniqueName: \"kubernetes.io/projected/1bf2bada-fd6c-4ec0-9748-c46e8b1ae2dc-kube-api-access-5clmz\") pod \"multus-admission-controller-857f4d67dd-ncw89\" (UID: \"1bf2bada-fd6c-4ec0-9748-c46e8b1ae2dc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ncw89" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.250614 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6453eca8-faa2-4fae-821a-246b72626d90-signing-key\") pod \"service-ca-9c57cc56f-gdsbs\" (UID: \"6453eca8-faa2-4fae-821a-246b72626d90\") " pod="openshift-service-ca/service-ca-9c57cc56f-gdsbs" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.250674 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqsw4\" (UniqueName: \"kubernetes.io/projected/11cea8f7-7594-45c6-8f0c-77caad6bdbc5-kube-api-access-qqsw4\") pod \"csi-hostpathplugin-lzb7h\" (UID: \"11cea8f7-7594-45c6-8f0c-77caad6bdbc5\") " pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.250713 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8e8c35b0-3703-4eff-8610-6933e4b7b391-registry-tls\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.250728 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/8e8c35b0-3703-4eff-8610-6933e4b7b391-bound-sa-token\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.250774 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1bf2bada-fd6c-4ec0-9748-c46e8b1ae2dc-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ncw89\" (UID: \"1bf2bada-fd6c-4ec0-9748-c46e8b1ae2dc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ncw89" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.250809 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp58x\" (UniqueName: \"kubernetes.io/projected/6453eca8-faa2-4fae-821a-246b72626d90-kube-api-access-sp58x\") pod \"service-ca-9c57cc56f-gdsbs\" (UID: \"6453eca8-faa2-4fae-821a-246b72626d90\") " pod="openshift-service-ca/service-ca-9c57cc56f-gdsbs" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.250834 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/11cea8f7-7594-45c6-8f0c-77caad6bdbc5-mountpoint-dir\") pod \"csi-hostpathplugin-lzb7h\" (UID: \"11cea8f7-7594-45c6-8f0c-77caad6bdbc5\") " pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.250867 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c8a22e2b-bb08-4b1f-964b-ff2651c3ea34-node-bootstrap-token\") pod \"machine-config-server-jvvlj\" (UID: \"c8a22e2b-bb08-4b1f-964b-ff2651c3ea34\") " pod="openshift-machine-config-operator/machine-config-server-jvvlj" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.250886 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/11cea8f7-7594-45c6-8f0c-77caad6bdbc5-csi-data-dir\") pod \"csi-hostpathplugin-lzb7h\" (UID: \"11cea8f7-7594-45c6-8f0c-77caad6bdbc5\") " pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.250943 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8e8c35b0-3703-4eff-8610-6933e4b7b391-registry-certificates\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.250960 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/11cea8f7-7594-45c6-8f0c-77caad6bdbc5-plugins-dir\") pod \"csi-hostpathplugin-lzb7h\" (UID: \"11cea8f7-7594-45c6-8f0c-77caad6bdbc5\") " pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.250999 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c8a22e2b-bb08-4b1f-964b-ff2651c3ea34-certs\") pod \"machine-config-server-jvvlj\" (UID: \"c8a22e2b-bb08-4b1f-964b-ff2651c3ea34\") " pod="openshift-machine-config-operator/machine-config-server-jvvlj" Jan 29 15:03:51 
crc kubenswrapper[4620]: I0129 15:03:51.251059 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e8c35b0-3703-4eff-8610-6933e4b7b391-trusted-ca\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.251123 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/11cea8f7-7594-45c6-8f0c-77caad6bdbc5-socket-dir\") pod \"csi-hostpathplugin-lzb7h\" (UID: \"11cea8f7-7594-45c6-8f0c-77caad6bdbc5\") " pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.251150 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrljf\" (UniqueName: \"kubernetes.io/projected/8e8c35b0-3703-4eff-8610-6933e4b7b391-kube-api-access-nrljf\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.251205 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.251231 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/11cea8f7-7594-45c6-8f0c-77caad6bdbc5-registration-dir\") pod \"csi-hostpathplugin-lzb7h\" (UID: \"11cea8f7-7594-45c6-8f0c-77caad6bdbc5\") " pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.251247 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6453eca8-faa2-4fae-821a-246b72626d90-signing-cabundle\") pod \"service-ca-9c57cc56f-gdsbs\" (UID: \"6453eca8-faa2-4fae-821a-246b72626d90\") " pod="openshift-service-ca/service-ca-9c57cc56f-gdsbs" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.252051 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8e8c35b0-3703-4eff-8610-6933e4b7b391-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.260482 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8e8c35b0-3703-4eff-8610-6933e4b7b391-registry-certificates\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.260875 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8e8c35b0-3703-4eff-8610-6933e4b7b391-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.263232 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6453eca8-faa2-4fae-821a-246b72626d90-signing-cabundle\") pod \"service-ca-9c57cc56f-gdsbs\" (UID: \"6453eca8-faa2-4fae-821a-246b72626d90\") " pod="openshift-service-ca/service-ca-9c57cc56f-gdsbs" Jan 29 15:03:51 crc kubenswrapper[4620]: E0129 15:03:51.263480 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:51.763466242 +0000 UTC m=+172.376293887 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.271173 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e8c35b0-3703-4eff-8610-6933e4b7b391-trusted-ca\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.284634 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8e8c35b0-3703-4eff-8610-6933e4b7b391-registry-tls\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.288554 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc"] Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.290519 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c8a22e2b-bb08-4b1f-964b-ff2651c3ea34-node-bootstrap-token\") pod \"machine-config-server-jvvlj\" (UID: \"c8a22e2b-bb08-4b1f-964b-ff2651c3ea34\") " pod="openshift-machine-config-operator/machine-config-server-jvvlj" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.296460 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6453eca8-faa2-4fae-821a-246b72626d90-signing-key\") pod \"service-ca-9c57cc56f-gdsbs\" (UID: \"6453eca8-faa2-4fae-821a-246b72626d90\") " pod="openshift-service-ca/service-ca-9c57cc56f-gdsbs" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.297083 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp58x\" (UniqueName: \"kubernetes.io/projected/6453eca8-faa2-4fae-821a-246b72626d90-kube-api-access-sp58x\") pod \"service-ca-9c57cc56f-gdsbs\" (UID: 
\"6453eca8-faa2-4fae-821a-246b72626d90\") " pod="openshift-service-ca/service-ca-9c57cc56f-gdsbs" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.298449 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1bf2bada-fd6c-4ec0-9748-c46e8b1ae2dc-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-ncw89\" (UID: \"1bf2bada-fd6c-4ec0-9748-c46e8b1ae2dc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ncw89" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.300043 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c8a22e2b-bb08-4b1f-964b-ff2651c3ea34-certs\") pod \"machine-config-server-jvvlj\" (UID: \"c8a22e2b-bb08-4b1f-964b-ff2651c3ea34\") " pod="openshift-machine-config-operator/machine-config-server-jvvlj" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.307260 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmk87\" (UniqueName: \"kubernetes.io/projected/c8a22e2b-bb08-4b1f-964b-ff2651c3ea34-kube-api-access-jmk87\") pod \"machine-config-server-jvvlj\" (UID: \"c8a22e2b-bb08-4b1f-964b-ff2651c3ea34\") " pod="openshift-machine-config-operator/machine-config-server-jvvlj" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.323112 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67"] Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.328683 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmpbk"] Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.328723 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrljf\" (UniqueName: \"kubernetes.io/projected/8e8c35b0-3703-4eff-8610-6933e4b7b391-kube-api-access-nrljf\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.353946 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.354136 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/11cea8f7-7594-45c6-8f0c-77caad6bdbc5-socket-dir\") pod \"csi-hostpathplugin-lzb7h\" (UID: \"11cea8f7-7594-45c6-8f0c-77caad6bdbc5\") " pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" Jan 29 15:03:51 crc kubenswrapper[4620]: E0129 15:03:51.354197 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:51.854167311 +0000 UTC m=+172.466994956 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.354279 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.354308 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/11cea8f7-7594-45c6-8f0c-77caad6bdbc5-registration-dir\") pod \"csi-hostpathplugin-lzb7h\" (UID: \"11cea8f7-7594-45c6-8f0c-77caad6bdbc5\") " pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.354358 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/11cea8f7-7594-45c6-8f0c-77caad6bdbc5-socket-dir\") pod \"csi-hostpathplugin-lzb7h\" (UID: \"11cea8f7-7594-45c6-8f0c-77caad6bdbc5\") " pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.354415 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqsw4\" (UniqueName: \"kubernetes.io/projected/11cea8f7-7594-45c6-8f0c-77caad6bdbc5-kube-api-access-qqsw4\") pod \"csi-hostpathplugin-lzb7h\" (UID: \"11cea8f7-7594-45c6-8f0c-77caad6bdbc5\") " pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.354457 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/11cea8f7-7594-45c6-8f0c-77caad6bdbc5-mountpoint-dir\") pod \"csi-hostpathplugin-lzb7h\" (UID: \"11cea8f7-7594-45c6-8f0c-77caad6bdbc5\") " pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.354486 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/11cea8f7-7594-45c6-8f0c-77caad6bdbc5-csi-data-dir\") pod \"csi-hostpathplugin-lzb7h\" (UID: \"11cea8f7-7594-45c6-8f0c-77caad6bdbc5\") " pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.354519 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/11cea8f7-7594-45c6-8f0c-77caad6bdbc5-plugins-dir\") pod \"csi-hostpathplugin-lzb7h\" (UID: \"11cea8f7-7594-45c6-8f0c-77caad6bdbc5\") " pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" Jan 29 15:03:51 crc kubenswrapper[4620]: E0129 15:03:51.354612 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 15:03:51.854601075 +0000 UTC m=+172.467428720 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.354627 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/11cea8f7-7594-45c6-8f0c-77caad6bdbc5-plugins-dir\") pod \"csi-hostpathplugin-lzb7h\" (UID: \"11cea8f7-7594-45c6-8f0c-77caad6bdbc5\") " pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.354690 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/11cea8f7-7594-45c6-8f0c-77caad6bdbc5-registration-dir\") pod \"csi-hostpathplugin-lzb7h\" (UID: \"11cea8f7-7594-45c6-8f0c-77caad6bdbc5\") " pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.354894 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/11cea8f7-7594-45c6-8f0c-77caad6bdbc5-mountpoint-dir\") pod \"csi-hostpathplugin-lzb7h\" (UID: \"11cea8f7-7594-45c6-8f0c-77caad6bdbc5\") " pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.354964 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/11cea8f7-7594-45c6-8f0c-77caad6bdbc5-csi-data-dir\") pod \"csi-hostpathplugin-lzb7h\" (UID: \"11cea8f7-7594-45c6-8f0c-77caad6bdbc5\") " pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.355154 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-x75k8"] Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.357446 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5clmz\" (UniqueName: \"kubernetes.io/projected/1bf2bada-fd6c-4ec0-9748-c46e8b1ae2dc-kube-api-access-5clmz\") pod \"multus-admission-controller-857f4d67dd-ncw89\" (UID: \"1bf2bada-fd6c-4ec0-9748-c46e8b1ae2dc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-ncw89" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.376700 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8e8c35b0-3703-4eff-8610-6933e4b7b391-bound-sa-token\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.392448 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gdsbs" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.400191 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-ncw89" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.419913 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqsw4\" (UniqueName: \"kubernetes.io/projected/11cea8f7-7594-45c6-8f0c-77caad6bdbc5-kube-api-access-qqsw4\") pod \"csi-hostpathplugin-lzb7h\" (UID: \"11cea8f7-7594-45c6-8f0c-77caad6bdbc5\") " pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.467562 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:51 crc kubenswrapper[4620]: E0129 15:03:51.468110 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:51.968089711 +0000 UTC m=+172.580917356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.485003 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jvvlj" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.521460 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.539487 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gt9s9"] Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.568916 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: E0129 15:03:51.569206 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:52.069195578 +0000 UTC m=+172.682023223 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.589172 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w8bnd"] Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.601264 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjk2k"] Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.602126 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8v2d"] Jan 29 15:03:51 crc kubenswrapper[4620]: W0129 15:03:51.655571 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cec96d0_b29c_4fc4_a8b6_a8e5e8db65cd.slice/crio-67b6868ed876e6b8e032904b4c5b35af56147e541f32650600fc26204ae69ad4 WatchSource:0}: Error finding container 67b6868ed876e6b8e032904b4c5b35af56147e541f32650600fc26204ae69ad4: Status 404 returned error can't find the container with id 67b6868ed876e6b8e032904b4c5b35af56147e541f32650600fc26204ae69ad4 Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.661259 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8grgt"] Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.674131 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:51 crc kubenswrapper[4620]: E0129 15:03:51.676225 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:52.176168439 +0000 UTC m=+172.788996084 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:51 crc kubenswrapper[4620]: W0129 15:03:51.699122 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37d0c18f_2c13_4cb8_8523_853e305dfa47.slice/crio-05f624097927fa29822cacdf6c07dbfdedecdbdd5c094965f039f18d186e31b0 WatchSource:0}: Error finding container 05f624097927fa29822cacdf6c07dbfdedecdbdd5c094965f039f18d186e31b0: Status 404 returned error can't find the container with id 05f624097927fa29822cacdf6c07dbfdedecdbdd5c094965f039f18d186e31b0 Jan 29 15:03:51 crc kubenswrapper[4620]: W0129 15:03:51.702305 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9295f40_e604_4f7f_9f79_f44f1a5f9bc3.slice/crio-73ce92464f2c7e12764901a1fbd0a150d09a4e25d006f777166c3cdc9069f243 WatchSource:0}: Error finding container 73ce92464f2c7e12764901a1fbd0a150d09a4e25d006f777166c3cdc9069f243: Status 404 returned error can't find the container with id 73ce92464f2c7e12764901a1fbd0a150d09a4e25d006f777166c3cdc9069f243 Jan 29 15:03:51 crc kubenswrapper[4620]: W0129 15:03:51.709054 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod737e8cb6_704f_4255_8985_ee18874f127d.slice/crio-a3f15b4051ac565edc674d7aee7a09e7ce2791cc8f88b2a73b339007f7625bf1 WatchSource:0}: Error finding container a3f15b4051ac565edc674d7aee7a09e7ce2791cc8f88b2a73b339007f7625bf1: Status 404 returned error can't find the container with id a3f15b4051ac565edc674d7aee7a09e7ce2791cc8f88b2a73b339007f7625bf1 Jan 29 15:03:51 crc kubenswrapper[4620]: W0129 15:03:51.715831 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb741ee63_8420_4991_9682_69a55770d9c6.slice/crio-7de77c8c7b78c278fddf10cc19041ad296a7005848be02daae2186f0d246b0e5 WatchSource:0}: Error finding container 7de77c8c7b78c278fddf10cc19041ad296a7005848be02daae2186f0d246b0e5: Status 404 returned error can't find the container with id 7de77c8c7b78c278fddf10cc19041ad296a7005848be02daae2186f0d246b0e5 Jan 29 15:03:51 crc kubenswrapper[4620]: W0129 15:03:51.721205 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode787f3bd_fa49_4961_9556_f5f0f25fca06.slice/crio-d920c362279709eebe0cf04faa259f8051694d7aa68b59884d4c5294dba75bc2 WatchSource:0}: Error finding container d920c362279709eebe0cf04faa259f8051694d7aa68b59884d4c5294dba75bc2: Status 404 returned error can't find the container with id d920c362279709eebe0cf04faa259f8051694d7aa68b59884d4c5294dba75bc2 Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.773043 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8srf4"] Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.783672 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:51 crc kubenswrapper[4620]: E0129 15:03:51.784051 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:52.284033458 +0000 UTC m=+172.896861103 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.791553 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwkt"] Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.893335 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:51 crc kubenswrapper[4620]: E0129 15:03:51.896043 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:52.396016087 +0000 UTC m=+173.008843742 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.920801 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gwr5k" event={"ID":"8a1b8f5b-4123-4cc0-9d2c-dca6ae8453d5","Type":"ContainerStarted","Data":"04cc8965e46c4d2ecf2656c4a3c5f3f8b1a9a3986f54db6bbd6ed3af12f7448a"} Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.926535 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sqrtn"] Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.926706 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" event={"ID":"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2","Type":"ContainerStarted","Data":"ec98a91fd2fe72f0344b66f9c97effb80647b792711b180fb1450ec88cf2ec07"} Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.927991 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8h784" event={"ID":"bc6f4802-99e9-4a10-bdfb-f132a81023eb","Type":"ContainerStarted","Data":"6d454ea01b04ce30bb4604e305c94e197f0271beec9f1141c2ff7264d5b0a1b0"} Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.929645 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-8h784" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.934300 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmpbk" event={"ID":"e10d1772-1876-4d4f-a2f8-9cbd70d5e7c9","Type":"ContainerStarted","Data":"58aa6884f9a991fd24344e09b4be14d6970bde960e49fcb9f95099a8cbcfcf65"} Jan 29 15:03:51 crc kubenswrapper[4620]: W0129 15:03:51.934385 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b19c06b_4261_4573_a304_a28976a1c610.slice/crio-3b5cbf151334915eabd07cf23803c3df13d178535cbee4346fed37746f962aa4 WatchSource:0}: Error finding container 3b5cbf151334915eabd07cf23803c3df13d178535cbee4346fed37746f962aa4: Status 404 returned error can't find the container with id 3b5cbf151334915eabd07cf23803c3df13d178535cbee4346fed37746f962aa4 Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.935740 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w8bnd" event={"ID":"737e8cb6-704f-4255-8985-ee18874f127d","Type":"ContainerStarted","Data":"a3f15b4051ac565edc674d7aee7a09e7ce2791cc8f88b2a73b339007f7625bf1"} Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.939142 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gt9s9" event={"ID":"37d0c18f-2c13-4cb8-8523-853e305dfa47","Type":"ContainerStarted","Data":"05f624097927fa29822cacdf6c07dbfdedecdbdd5c094965f039f18d186e31b0"} Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.939936 4620 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" event={"ID":"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b","Type":"ContainerStarted","Data":"8961cbe9ec40bfd7cf3366d7bf064a06ba473f5cfb39f9bf0e058c6c34b09282"} Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.945405 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5lxk8"] Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.970648 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z57gf" event={"ID":"aa662f18-6ab4-43b8-8e65-8de41043b74d","Type":"ContainerStarted","Data":"2704f88e1799bacba75f2bfa870b35017328409f120ee96b0a38a88ef4ac79a7"} Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.970688 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z57gf" event={"ID":"aa662f18-6ab4-43b8-8e65-8de41043b74d","Type":"ContainerStarted","Data":"dc4b855e65c23c1e3f912506129d8790e79d40b449d5ef637ba5e7458b64e57d"} Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.972348 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgs4s" event={"ID":"bfd69ce1-51c3-44b0-81e5-576a633bff91","Type":"ContainerStarted","Data":"36a8fadf96309990e09a56caa7d06ce3cb69b5d855647669818ca29d9a602063"} Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.973003 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x75k8" event={"ID":"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd","Type":"ContainerStarted","Data":"67b6868ed876e6b8e032904b4c5b35af56147e541f32650600fc26204ae69ad4"} Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.973948 4620 patch_prober.go:28] interesting pod/console-operator-58897d9998-8h784 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.974088 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8h784" podUID="bc6f4802-99e9-4a10-bdfb-f132a81023eb" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.974461 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qpx6x" event={"ID":"9b6421d4-a97f-4867-a78c-50ba4d6486ea","Type":"ContainerStarted","Data":"c7c5cb5189578f3cfa30202920390b1d6b82ef40fe24da57a1044c67b8729672"} Jan 29 15:03:51 crc kubenswrapper[4620]: I0129 15:03:51.983463 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8grgt" event={"ID":"e787f3bd-fa49-4961-9556-f5f0f25fca06","Type":"ContainerStarted","Data":"d920c362279709eebe0cf04faa259f8051694d7aa68b59884d4c5294dba75bc2"} Jan 29 15:03:51 crc kubenswrapper[4620]: W0129 15:03:51.985432 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02e136c3_8346_4d92_bc4f_fbee60798447.slice/crio-ae610331d2d6910f3ca1e0adc9737bdc1a7e1ae671a406a6af626127dbf9a169 WatchSource:0}: Error finding container ae610331d2d6910f3ca1e0adc9737bdc1a7e1ae671a406a6af626127dbf9a169: 
Status 404 returned error can't find the container with id ae610331d2d6910f3ca1e0adc9737bdc1a7e1ae671a406a6af626127dbf9a169 Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.016498 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:52 crc kubenswrapper[4620]: E0129 15:03:52.017168 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:52.517128034 +0000 UTC m=+173.129955739 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.018142 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-swx9b"] Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.027147 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5ppqg" event={"ID":"cc4969bb-ac64-4361-8666-99de6de39271","Type":"ContainerStarted","Data":"43e63990157a2357e8bac639251ae89e06a80c52d82923fdf760120c002c2015"} Jan 29 15:03:52 crc kubenswrapper[4620]: W0129 15:03:52.047913 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b2a43de_fb0b_4468_af29_c7436e08fb13.slice/crio-e5d10af6738659bac02abd671c9cad9ac7e30c02fa1692f674521de4eec1dbf2 WatchSource:0}: Error finding container e5d10af6738659bac02abd671c9cad9ac7e30c02fa1692f674521de4eec1dbf2: Status 404 returned error can't find the container with id e5d10af6738659bac02abd671c9cad9ac7e30c02fa1692f674521de4eec1dbf2 Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.058251 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" event={"ID":"f9d96dcf-b094-485c-8636-401ccc71e918","Type":"ContainerStarted","Data":"321c7ce6490981b385b9380849cfd37e80cdc6c7998c962d10d93bb8b491b30c"} Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.087862 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc" event={"ID":"4e24af32-2b91-4772-b26e-a683ba5c3d16","Type":"ContainerStarted","Data":"8f9f07a5100aef57a5b3265a0093dd5719e335b5e8f47ddb2994da913286cd50"} Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.117319 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:52 crc 
kubenswrapper[4620]: E0129 15:03:52.118396 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:52.618371625 +0000 UTC m=+173.231199280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.127819 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r"] Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.146653 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thq9h"] Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.176942 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb" podStartSLOduration=144.17692437 podStartE2EDuration="2m24.17692437s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:52.17535025 +0000 UTC m=+172.788177915" watchObservedRunningTime="2026-01-29 15:03:52.17692437 +0000 UTC m=+172.789752015" Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.189917 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5qflb"] Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.214726 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjk2k" event={"ID":"b741ee63-8420-4991-9682-69a55770d9c6","Type":"ContainerStarted","Data":"7de77c8c7b78c278fddf10cc19041ad296a7005848be02daae2186f0d246b0e5"} Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.225941 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:52 crc kubenswrapper[4620]: E0129 15:03:52.226290 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:52.726274405 +0000 UTC m=+173.339102050 (durationBeforeRetry 500ms). 
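Note: the "SyncLoop (PLEG): event for pod" lines carry a small JSON payload, event={"ID":...,"Type":...,"Data":...}, where ID is the pod UID and Data is the container (or sandbox) ID reported by the runtime. A short Go sketch that pulls these events out of raw log lines like the ones above (the regex is an assumption about the line shape, not a kubelet API):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"regexp"
    )

    // podLifecycleEvent mirrors the event={...} payload printed on PLEG lines.
    type podLifecycleEvent struct {
    	ID   string `json:"ID"`   // pod UID
    	Type string `json:"Type"` // e.g. ContainerStarted
    	Data string `json:"Data"` // container or sandbox ID
    }

    var eventRe = regexp.MustCompile(`event=(\{.*?\})`)

    func parsePLEG(line string) (*podLifecycleEvent, bool) {
    	m := eventRe.FindStringSubmatch(line)
    	if m == nil {
    		return nil, false
    	}
    	var ev podLifecycleEvent
    	if err := json.Unmarshal([]byte(m[1]), &ev); err != nil {
    		return nil, false
    	}
    	return &ev, true
    }

    func main() {
    	// Sample payload taken from the console pod event above.
    	line := `event={"ID":"aa662f18-6ab4-43b8-8e65-8de41043b74d","Type":"ContainerStarted","Data":"2704f88e1799bacba75f2bfa870b35017328409f120ee96b0a38a88ef4ac79a7"}`
    	if ev, ok := parsePLEG(line); ok {
    		fmt.Printf("%s pod=%s container=%.12s\n", ev.Type, ev.ID, ev.Data)
    	}
    }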
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.230513 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4cnjz"]
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.232241 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-c8czx"]
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.251358 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8v2d" event={"ID":"e9295f40-e604-4f7f-9f79-f44f1a5f9bc3","Type":"ContainerStarted","Data":"73ce92464f2c7e12764901a1fbd0a150d09a4e25d006f777166c3cdc9069f243"}
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.314168 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cl5pv"]
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.315336 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bgtck"]
Jan 29 15:03:52 crc kubenswrapper[4620]: W0129 15:03:52.333275 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71877c8f_d05b_42ca_ad80_7ee3277d9558.slice/crio-2344f8663e2bd43f4713a42d62d45d9be007af474096c6310703bf6e74da03d4 WatchSource:0}: Error finding container 2344f8663e2bd43f4713a42d62d45d9be007af474096c6310703bf6e74da03d4: Status 404 returned error can't find the container with id 2344f8663e2bd43f4713a42d62d45d9be007af474096c6310703bf6e74da03d4
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.335450 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:52 crc kubenswrapper[4620]: E0129 15:03:52.336094 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:52.836075946 +0000 UTC m=+173.448903591 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.376601 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lftvg"]
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.436918 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:52 crc kubenswrapper[4620]: E0129 15:03:52.438314 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:52.938301597 +0000 UTC m=+173.551129242 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.464443 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-twqvf" podStartSLOduration=145.464426001 podStartE2EDuration="2m25.464426001s" podCreationTimestamp="2026-01-29 15:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:52.41553261 +0000 UTC m=+173.028360255" watchObservedRunningTime="2026-01-29 15:03:52.464426001 +0000 UTC m=+173.077253646"
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.560395 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:52 crc kubenswrapper[4620]: E0129 15:03:52.561997 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:53.061981415 +0000 UTC m=+173.674809060 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:52 crc kubenswrapper[4620]: W0129 15:03:52.596645 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf35cc7a6_91ec_4a66_8bb1_6418c351eaf3.slice/crio-0acafb681f37e94f0f2385ed4a03a45b62f60b794c141e65d0d94e31de75b512 WatchSource:0}: Error finding container 0acafb681f37e94f0f2385ed4a03a45b62f60b794c141e65d0d94e31de75b512: Status 404 returned error can't find the container with id 0acafb681f37e94f0f2385ed4a03a45b62f60b794c141e65d0d94e31de75b512
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.662907 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:52 crc kubenswrapper[4620]: E0129 15:03:52.663546 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:53.163533526 +0000 UTC m=+173.776361171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.706515 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-27rrp"]
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.764390 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:52 crc kubenswrapper[4620]: E0129 15:03:52.764539 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:53.264506788 +0000 UTC m=+173.877334433 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
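Note: the manager.go:1169 "Failed to process watch event ... Status 404" warnings are cAdvisor racing the container runtime: a new cgroup directory appears under kubepods.slice before the runtime can answer for that container ID, so the inspect returns 404. The warnings are transient, and the pod UID and container ID are recoverable from the cgroup path itself. A small Go sketch (the regex is an assumption about the systemd slice naming shown in these lines):

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    var sliceRe = regexp.MustCompile(`pod([0-9a-f_]+)\.slice/crio-([0-9a-f]+)`)

    // parseCgroupPath extracts the pod UID and container ID from paths like
    // /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod<uid>.slice/crio-<id>
    func parseCgroupPath(p string) (podUID, containerID string, ok bool) {
    	m := sliceRe.FindStringSubmatch(p)
    	if m == nil {
    		return "", "", false
    	}
    	// systemd slice names use '_' where the pod UID has '-'; undo that.
    	return strings.ReplaceAll(m[1], "_", "-"), m[2], true
    }

    func main() {
    	p := "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71877c8f_d05b_42ca_ad80_7ee3277d9558.slice/crio-2344f8663e2bd43f4713a42d62d45d9be007af474096c6310703bf6e74da03d4"
    	uid, cid, _ := parseCgroupPath(p)
    	fmt.Println(uid, cid[:12]) // 71877c8f-d05b-42ca-ad80-7ee3277d9558 2344f8663e2b
    }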
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.766332 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:52 crc kubenswrapper[4620]: E0129 15:03:52.766692 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:53.266680936 +0000 UTC m=+173.879508581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.770237 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-xbhzj"]
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.849623 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5h6r9"]
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.873746 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:52 crc kubenswrapper[4620]: E0129 15:03:52.874197 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:53.374179055 +0000 UTC m=+173.987006700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.907793 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-z57gf" podStartSLOduration=145.907775633 podStartE2EDuration="2m25.907775633s" podCreationTimestamp="2026-01-29 15:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:52.905940885 +0000 UTC m=+173.518768530" watchObservedRunningTime="2026-01-29 15:03:52.907775633 +0000 UTC m=+173.520603288"
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.913320 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zsh8m"]
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.920391 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jfjwn"]
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.930731 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-5ppqg" podStartSLOduration=144.930692046 podStartE2EDuration="2m24.930692046s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:52.928785655 +0000 UTC m=+173.541613310" watchObservedRunningTime="2026-01-29 15:03:52.930692046 +0000 UTC m=+173.543519691"
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.936485 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5ppqg"
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.936625 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.936654 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.966987 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-8h784" podStartSLOduration=145.966970899 podStartE2EDuration="2m25.966970899s" podCreationTimestamp="2026-01-29 15:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:52.96639016 +0000 UTC m=+173.579217805" watchObservedRunningTime="2026-01-29 15:03:52.966970899 +0000 UTC m=+173.579798544"
Jan 29 15:03:52 crc kubenswrapper[4620]: I0129 15:03:52.976657 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
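Note: in the pod_startup_latency_tracker lines, podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp, and podStartE2EDuration is the same value printed as a Go duration; the zero firstStartedPulling/lastFinishedPulling timestamps ("0001-01-01 00:00:00 +0000 UTC") indicate no image pull was recorded for the pod. Reproducing the console pod's numbers from the line above:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Layout matching the timestamps printed in the log.
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, _ := time.Parse(layout, "2026-01-29 15:01:27 +0000 UTC")
    	observed, _ := time.Parse(layout, "2026-01-29 15:03:52.907775633 +0000 UTC")
    	d := observed.Sub(created)
    	fmt.Println(d.Seconds()) // 145.907775633   -> podStartSLOduration
    	fmt.Println(d)           // 2m25.907775633s -> podStartE2EDuration
    }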
Jan 29 15:03:52 crc kubenswrapper[4620]: E0129 15:03:52.977252 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:53.477217892 +0000 UTC m=+174.090045537 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.007487 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qpx6x" podStartSLOduration=146.007473115 podStartE2EDuration="2m26.007473115s" podCreationTimestamp="2026-01-29 15:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:53.006918387 +0000 UTC m=+173.619746032" watchObservedRunningTime="2026-01-29 15:03:53.007473115 +0000 UTC m=+173.620300760"
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.032664 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-ncw89"]
Jan 29 15:03:53 crc kubenswrapper[4620]: W0129 15:03:53.062900 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82284043_1bad_4d00_ab41_f15f62b48fdb.slice/crio-b03a847e3e834d4b5ba949a174a7a8662ba18d03b09bada9db66b45f8a83dc9c WatchSource:0}: Error finding container b03a847e3e834d4b5ba949a174a7a8662ba18d03b09bada9db66b45f8a83dc9c: Status 404 returned error can't find the container with id b03a847e3e834d4b5ba949a174a7a8662ba18d03b09bada9db66b45f8a83dc9c
Jan 29 15:03:53 crc kubenswrapper[4620]: W0129 15:03:53.076726 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73c298d0_47e0_4026_bb37_871f2357393d.slice/crio-8e6e3d9e4afc42f091306fcd60193d60c89dc9cbbf7c5668a2726e7efd8950cd WatchSource:0}: Error finding container 8e6e3d9e4afc42f091306fcd60193d60c89dc9cbbf7c5668a2726e7efd8950cd: Status 404 returned error can't find the container with id 8e6e3d9e4afc42f091306fcd60193d60c89dc9cbbf7c5668a2726e7efd8950cd
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.077267 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:53 crc kubenswrapper[4620]: E0129 15:03:53.077602 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:53.577588625 +0000 UTC m=+174.190416270 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.153101 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gdsbs"]
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.161737 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9"]
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.167915 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lzb7h"]
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.179550 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:53 crc kubenswrapper[4620]: E0129 15:03:53.179927 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:53.67991237 +0000 UTC m=+174.292740015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:53 crc kubenswrapper[4620]: W0129 15:03:53.249481 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6453eca8_faa2_4fae_821a_246b72626d90.slice/crio-734f8003fdd762b233e23c69602fc0419ec9c08eb07c4c92fb13d2d93ca43300 WatchSource:0}: Error finding container 734f8003fdd762b233e23c69602fc0419ec9c08eb07c4c92fb13d2d93ca43300: Status 404 returned error can't find the container with id 734f8003fdd762b233e23c69602fc0419ec9c08eb07c4c92fb13d2d93ca43300
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.280076 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:53 crc kubenswrapper[4620]: E0129 15:03:53.280384 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:53.780367346 +0000 UTC m=+174.393194991 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
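Note: the "SyncLoop UPDATE" for hostpath-provisioner/csi-hostpathplugin-lzb7h just above is the way out of the retry loop: once that plugin pod runs, its registrar drops a registration socket into the kubelet's plugins_registry directory and kubevirt.io.hostpath-provisioner appears in the list of registered drivers. A crude host-side watch for that moment, assuming the default kubelet root dir (adjust the path if --root-dir differs):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    	"time"
    )

    func main() {
    	const dir = "/var/lib/kubelet/plugins_registry"
    	const driver = "kubevirt.io.hostpath-provisioner"
    	for {
    		entries, err := os.ReadDir(dir)
    		if err == nil {
    			for _, e := range entries {
    				if strings.Contains(e.Name(), driver) {
    					fmt.Println("registration socket present:", e.Name())
    					return
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // same cadence as the mount retries
    	}
    }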
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.282425 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgs4s" event={"ID":"bfd69ce1-51c3-44b0-81e5-576a633bff91","Type":"ContainerStarted","Data":"9300974faedcaf1e20c01f77b05c6dee6d2745f75c1e7c05451a49bab9d387fe"}
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.297111 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cnjz" event={"ID":"cbc1d8f6-fc70-4349-b70a-44751f67425f","Type":"ContainerStarted","Data":"8026cb1c7fa330f445e91fdc6952665ca20352d98b47cd3c382e5a601fede079"}
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.301030 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r" event={"ID":"71877c8f-d05b-42ca-ad80-7ee3277d9558","Type":"ContainerStarted","Data":"2344f8663e2bd43f4713a42d62d45d9be007af474096c6310703bf6e74da03d4"}
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.305169 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8srf4" event={"ID":"9b19c06b-4261-4573-a304-a28976a1c610","Type":"ContainerStarted","Data":"3b5cbf151334915eabd07cf23803c3df13d178535cbee4346fed37746f962aa4"}
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.308138 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thq9h" event={"ID":"ea9233e9-49ad-4164-a7c5-1d02ca87560a","Type":"ContainerStarted","Data":"0c93e640a399085fd39c7d8565e7c6fb86584f6c36da35d3cc8dd742b127cfee"}
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.311993 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gt9s9" event={"ID":"37d0c18f-2c13-4cb8-8523-853e305dfa47","Type":"ContainerStarted","Data":"27dec63ed4ce7f004cf58d5a303fb73b284ac6727506ef24a5cdebe28d432bb5"}
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.314305 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5ppqg" event={"ID":"cc4969bb-ac64-4361-8666-99de6de39271","Type":"ContainerStarted","Data":"411ce740dad05509e0017469ca4fd87ed91269c4ec4c3e574383cdd786931a2a"}
Jan 29 15:03:53 crc kubenswrapper[4620]: W0129 15:03:53.323262 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod713c81b7_8d56_4d58_bd4e_f827de0ca17b.slice/crio-1479c4d184a9b3b388c649d6224706a5ccddd29bf2155081add9fd2fe06223eb WatchSource:0}: Error finding container 1479c4d184a9b3b388c649d6224706a5ccddd29bf2155081add9fd2fe06223eb: Status 404 returned error can't find the container with id 1479c4d184a9b3b388c649d6224706a5ccddd29bf2155081add9fd2fe06223eb
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.330376 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jvvlj" event={"ID":"c8a22e2b-bb08-4b1f-964b-ff2651c3ea34","Type":"ContainerStarted","Data":"acb1fcf036b85ab4bfdfb4f559b2bfa4536126c4f32e435215643027e7b3e129"}
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.341771 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc" event={"ID":"4e24af32-2b91-4772-b26e-a683ba5c3d16","Type":"ContainerStarted","Data":"dcb1b43146711441a877de4bf8bf3289d7daaef1b0063b25e530600a1f8169b4"}
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.347373 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bgtck" event={"ID":"cb416312-125b-4c11-9ecd-07a06d5e6c02","Type":"ContainerStarted","Data":"ec0539f80c47885021c204cb648e50b03e33157bcbfe71f68e8597d7e37724a3"}
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.352403 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5lxk8" event={"ID":"0df986bd-6456-42fd-b063-301263d6e7ce","Type":"ContainerStarted","Data":"9f6324cb1258f28d634a66fc88cf6b35ec3ffb629cf41f4abe2570121aa64b53"}
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.356675 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" event={"ID":"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b","Type":"ContainerStarted","Data":"2a1dd4bee19f06f8952a5f09eacf3cb5271e43b22b442efae03e4bfb5517e370"}
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.362191 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8czx" event={"ID":"55fa1078-ff94-4efb-80ea-3bcf65192874","Type":"ContainerStarted","Data":"310ede20f5bbb9579b4f2bc92f14ec77ea89667b960d00f44e951a19fd821915"}
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.364081 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-swx9b" event={"ID":"02381b1b-463c-4532-a690-deee86ffc674","Type":"ContainerStarted","Data":"f0fefdce58509cd30f8dba0db9d0c2f3603c201f1ccb69af8495f93365d1f7bb"}
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.379422 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwkt" event={"ID":"02e136c3-8346-4d92-bc4f-fbee60798447","Type":"ContainerStarted","Data":"ae610331d2d6910f3ca1e0adc9737bdc1a7e1ae671a406a6af626127dbf9a169"}
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.381140 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.381341 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" event={"ID":"0b2a43de-fb0b-4468-af29-c7436e08fb13","Type":"ContainerStarted","Data":"e5d10af6738659bac02abd671c9cad9ac7e30c02fa1692f674521de4eec1dbf2"}
Jan 29 15:03:53 crc kubenswrapper[4620]: E0129 15:03:53.381476 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:53.881463592 +0000 UTC m=+174.494291237 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.384824 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gwr5k" event={"ID":"8a1b8f5b-4123-4cc0-9d2c-dca6ae8453d5","Type":"ContainerStarted","Data":"3769d0cda6df5ae4f92d74c36aac57c6b747704ccd19183b199fcd408e8f952d"}
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.385368 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-gwr5k"
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.388043 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5h6r9" event={"ID":"73c298d0-47e0-4026-bb37-871f2357393d","Type":"ContainerStarted","Data":"8e6e3d9e4afc42f091306fcd60193d60c89dc9cbbf7c5668a2726e7efd8950cd"}
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.389451 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" event={"ID":"f9d96dcf-b094-485c-8636-401ccc71e918","Type":"ContainerStarted","Data":"6a85540471008882969c933ba22ea3df1c58905bfb7812da960ace1c8418b25e"}
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.390783 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-2j567"
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.395587 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jfjwn" event={"ID":"82284043-1bad-4d00-ab41-f15f62b48fdb","Type":"ContainerStarted","Data":"b03a847e3e834d4b5ba949a174a7a8662ba18d03b09bada9db66b45f8a83dc9c"}
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.397907 4620 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwr5k container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.397957 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gwr5k" podUID="8a1b8f5b-4123-4cc0-9d2c-dca6ae8453d5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.407238 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" event={"ID":"11cea8f7-7594-45c6-8f0c-77caad6bdbc5","Type":"ContainerStarted","Data":"7f82112c729df624f20920170ff696c4143eee835fdfe3648648aa50040c8169"}
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.410695 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ncw89" event={"ID":"1bf2bada-fd6c-4ec0-9748-c46e8b1ae2dc","Type":"ContainerStarted","Data":"8bebb036ffd08afba06090433ca20911075a8099622002bf8923ad354e6871e5"}
(PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ncw89" event={"ID":"1bf2bada-fd6c-4ec0-9748-c46e8b1ae2dc","Type":"ContainerStarted","Data":"8bebb036ffd08afba06090433ca20911075a8099622002bf8923ad354e6871e5"} Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.418034 4620 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2j567 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.418079 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" podUID="f9d96dcf-b094-485c-8636-401ccc71e918" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.420520 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-27rrp" event={"ID":"e1ad943a-d557-4727-b4e9-a863aae1a47d","Type":"ContainerStarted","Data":"c693fd8891d9d9c01a3482ea60ab28b463f160ec22302bd4cff5cbf2be1c802a"} Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.436899 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cl5pv" event={"ID":"d44c8c1f-8400-4291-b4c0-fa7d8a8d584a","Type":"ContainerStarted","Data":"e3acd5b58a4019264efaae318921df51242ce6e68019747ac46b682c38ad0aa1"} Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.437747 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gdsbs" event={"ID":"6453eca8-faa2-4fae-821a-246b72626d90","Type":"ContainerStarted","Data":"734f8003fdd762b233e23c69602fc0419ec9c08eb07c4c92fb13d2d93ca43300"} Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.453149 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xbhzj" event={"ID":"c5b2667b-8dba-45fe-8bc5-ca0289d36d96","Type":"ContainerStarted","Data":"1f705207120d1af7d5616ce48e832833fd53ec985c944773aa6ddba56378b09a"} Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.457928 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lftvg" event={"ID":"f35cc7a6-91ec-4a66-8bb1-6418c351eaf3","Type":"ContainerStarted","Data":"0acafb681f37e94f0f2385ed4a03a45b62f60b794c141e65d0d94e31de75b512"} Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.474456 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" event={"ID":"7436f3bf-66e4-4314-aa0b-8af645dd5bee","Type":"ContainerStarted","Data":"3c33f209131256b780b6dc73c06de9985690bdd301e90be2d856fd77b39c58fe"} Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.482425 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:53 crc kubenswrapper[4620]: E0129 15:03:53.485512 4620 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:53.985490439 +0000 UTC m=+174.598318084 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.523429 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmpbk" event={"ID":"e10d1772-1876-4d4f-a2f8-9cbd70d5e7c9","Type":"ContainerStarted","Data":"53a34e8611544bebb413bdd69ff71dc437b80d00e0e6d08ba8d71ffaf8144a92"} Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.550677 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5qflb" event={"ID":"75a509e8-6d94-4a62-b97d-ea05b87b1b74","Type":"ContainerStarted","Data":"69655660a5ac94f3a24466cc22c72800a94ffb0d28da50887f07b98c7ca638e4"} Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.560933 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x75k8" event={"ID":"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd","Type":"ContainerStarted","Data":"938f9ddac4ea0ca8d8a0f00b9312904510716ccb67835c0c30388d2c136eed98"} Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.586734 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:53 crc kubenswrapper[4620]: E0129 15:03:53.587105 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:54.087090542 +0000 UTC m=+174.699918187 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.651490 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-gt9s9" podStartSLOduration=145.65146214 podStartE2EDuration="2m25.65146214s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:53.650206571 +0000 UTC m=+174.263034216" watchObservedRunningTime="2026-01-29 15:03:53.65146214 +0000 UTC m=+174.264289785" Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.680406 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-8h784" Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.695254 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.696198 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-gwr5k" podStartSLOduration=145.69617691 podStartE2EDuration="2m25.69617691s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:53.69460441 +0000 UTC m=+174.307432055" watchObservedRunningTime="2026-01-29 15:03:53.69617691 +0000 UTC m=+174.309004555" Jan 29 15:03:53 crc kubenswrapper[4620]: E0129 15:03:53.696885 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:54.196867422 +0000 UTC m=+174.809695077 (durationBeforeRetry 500ms). 
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.798506 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:53 crc kubenswrapper[4620]: E0129 15:03:53.799207 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:54.299166465 +0000 UTC m=+174.911994110 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.838744 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rmpbk" podStartSLOduration=145.838725322 podStartE2EDuration="2m25.838725322s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:53.837294737 +0000 UTC m=+174.450122382" watchObservedRunningTime="2026-01-29 15:03:53.838725322 +0000 UTC m=+174.451552977"
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.883100 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" podStartSLOduration=145.88308323 podStartE2EDuration="2m25.88308323s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:53.8776959 +0000 UTC m=+174.490523535" watchObservedRunningTime="2026-01-29 15:03:53.88308323 +0000 UTC m=+174.495910875"
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.901718 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:53 crc kubenswrapper[4620]: E0129 15:03:53.902127 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:54.40210948 +0000 UTC m=+175.014937125 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.943609 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:03:53 crc kubenswrapper[4620]: [-]has-synced failed: reason withheld
Jan 29 15:03:53 crc kubenswrapper[4620]: [+]process-running ok
Jan 29 15:03:53 crc kubenswrapper[4620]: healthz check failed
Jan 29 15:03:53 crc kubenswrapper[4620]: I0129 15:03:53.943662 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.010844 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:54 crc kubenswrapper[4620]: E0129 15:03:54.011154 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:54.511139416 +0000 UTC m=+175.123967061 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.111680 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:54 crc kubenswrapper[4620]: E0129 15:03:54.113154 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:54.61313869 +0000 UTC m=+175.225966335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
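Note: once the router's port accepts connections, the startup probe failure changes shape: the process answers but returns HTTP 500, and the body lists each internal check in the common healthz format, one "[+]name ok" or "[-]name failed: reason withheld" line per check plus an overall verdict. A toy Go reproduction of that report format (check names taken from the lines above; the aggregation logic is an assumption):

    package main

    import "fmt"

    type check struct {
    	name string
    	ok   bool
    }

    // report renders healthz-style output and the matching HTTP status code.
    func report(checks []check) (string, int) {
    	out, failed := "", false
    	for _, c := range checks {
    		if c.ok {
    			out += fmt.Sprintf("[+]%s ok\n", c.name)
    		} else {
    			out += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
    			failed = true
    		}
    	}
    	if failed {
    		return out + "healthz check failed\n", 500
    	}
    	return out + "ok\n", 200
    }

    func main() {
    	body, code := report([]check{
    		{"backend-http", false},
    		{"has-synced", false},
    		{"process-running", true},
    	})
    	fmt.Println(code)
    	fmt.Print(body)
    }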
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.213583 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:54 crc kubenswrapper[4620]: E0129 15:03:54.214207 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:54.714174054 +0000 UTC m=+175.327001709 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.315007 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:54 crc kubenswrapper[4620]: E0129 15:03:54.315359 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:54.815337353 +0000 UTC m=+175.428164998 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.416678 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:54 crc kubenswrapper[4620]: E0129 15:03:54.417099 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:54.91708335 +0000 UTC m=+175.529910995 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.518153 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:54 crc kubenswrapper[4620]: E0129 15:03:54.518314 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:55.018292928 +0000 UTC m=+175.631120573 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.518704 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:54 crc kubenswrapper[4620]: E0129 15:03:54.519102 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:55.019086104 +0000 UTC m=+175.631913749 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.620729 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:54 crc kubenswrapper[4620]: E0129 15:03:54.620861 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:55.120838501 +0000 UTC m=+175.733666156 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.621059 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:54 crc kubenswrapper[4620]: E0129 15:03:54.621407 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:55.121395578 +0000 UTC m=+175.734223233 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.653330 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5lxk8" event={"ID":"0df986bd-6456-42fd-b063-301263d6e7ce","Type":"ContainerStarted","Data":"569e49dcf4034b42f22dec3400d0a116ca426a31fdc9bd7ff4e5417e6108f53b"} Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.676439 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjk2k" event={"ID":"b741ee63-8420-4991-9682-69a55770d9c6","Type":"ContainerStarted","Data":"a83c5d37470be4976cdad1be6d633c3106e8f95c3d752d0b8b44fc28fa449592"} Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.698673 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" event={"ID":"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2","Type":"ContainerStarted","Data":"e794e243577e75c38d03d6160ff386462c38f5ebd3ec7802afe69d2a35a3f23d"} Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.699533 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.707690 4620 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8q7cm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" start-of-body= Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.707737 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" podUID="7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2" containerName="oauth-openshift" 
probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.718561 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cl5pv" event={"ID":"d44c8c1f-8400-4291-b4c0-fa7d8a8d584a","Type":"ContainerStarted","Data":"95380f3ba626a1bd7dba65653baf9d1797dcb496c715f4aea02e005f5ffeda67"} Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.722991 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:54 crc kubenswrapper[4620]: E0129 15:03:54.723188 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:55.223167605 +0000 UTC m=+175.835995240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.723490 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:54 crc kubenswrapper[4620]: E0129 15:03:54.724952 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:55.224942432 +0000 UTC m=+175.837770077 (durationBeforeRetry 500ms). 
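Each failed volume operation is parked by the kubelet's pending-operations bookkeeping, which stamps a no-retry-before deadline; that is the "No retries permitted until <t> (durationBeforeRetry 500ms)" text above and below. Throughout this log the delay stays at its 500ms initial value; the sketch below adds an assumed exponential doubling and cap for repeated failures, in the spirit of the real logic in nestedpendingoperations.go (an illustration, not that file's code):

package main

import (
	"fmt"
	"time"
)

// expBackoff is an illustrative stand-in for the per-operation backoff state.
type expBackoff struct {
	lastError time.Time
	delay     time.Duration
}

const (
	initialDelay = 500 * time.Millisecond // matches "durationBeforeRetry 500ms"
	maxDelay     = 2 * time.Minute        // assumed cap, for illustration only
)

// update records a failure and grows the delay (assumed doubling).
func (b *expBackoff) update(now time.Time) {
	if b.delay == 0 {
		b.delay = initialDelay
	} else {
		b.delay *= 2
		if b.delay > maxDelay {
			b.delay = maxDelay
		}
	}
	b.lastError = now
}

// safeToRetry reproduces the shape of the log's gating error.
func (b *expBackoff) safeToRetry(now time.Time) error {
	if deadline := b.lastError.Add(b.delay); now.Before(deadline) {
		return fmt.Errorf("No retries permitted until %s (durationBeforeRetry %s)",
			deadline.Format(time.RFC3339Nano), b.delay)
	}
	return nil
}

func main() {
	var b expBackoff
	now := time.Now()
	b.update(now)                                    // first failure: 500ms penalty box
	fmt.Println(b.safeToRetry(now))                  // blocked
	fmt.Println(b.safeToRetry(now.Add(time.Second))) // <nil>: retry allowed
}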
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.733938 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8grgt" event={"ID":"e787f3bd-fa49-4961-9556-f5f0f25fca06","Type":"ContainerStarted","Data":"a3c5be361bc8270148233a2c80c22ceb34ef8cef59095b99c99ebb023a8927f0"} Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.752237 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jvvlj" event={"ID":"c8a22e2b-bb08-4b1f-964b-ff2651c3ea34","Type":"ContainerStarted","Data":"f1cec016fc1b92b5824ec096e169ed45c1cafee30520ea4d3874a512329afc5b"} Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.760600 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thq9h" event={"ID":"ea9233e9-49ad-4164-a7c5-1d02ca87560a","Type":"ContainerStarted","Data":"e0b390d1c1f28fb0ed3df9e90a2e4359559e0e0af44082ebf8fbfb60498e7429"} Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.771446 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" podStartSLOduration=146.771430876 podStartE2EDuration="2m26.771430876s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:54.765781988 +0000 UTC m=+175.378609653" watchObservedRunningTime="2026-01-29 15:03:54.771430876 +0000 UTC m=+175.384258521" Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.772392 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-sjk2k" podStartSLOduration=146.772383027 podStartE2EDuration="2m26.772383027s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:54.733622105 +0000 UTC m=+175.346449750" watchObservedRunningTime="2026-01-29 15:03:54.772383027 +0000 UTC m=+175.385210672" Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.781474 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-27rrp" event={"ID":"e1ad943a-d557-4727-b4e9-a863aae1a47d","Type":"ContainerStarted","Data":"62e122b49a05a022585a71832e09684712ca3feb6963fb9b7238f380a95d2d82"} Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.785824 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cnjz" event={"ID":"cbc1d8f6-fc70-4349-b70a-44751f67425f","Type":"ContainerStarted","Data":"5cedf27a86c69ffd309b8b1069eedee06eef36b3f3ea19c25caa327ed34475ba"} Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.787940 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8czx" event={"ID":"55fa1078-ff94-4efb-80ea-3bcf65192874","Type":"ContainerStarted","Data":"e3e5c6b4208dc77fcf2aa36fb79189a09d66e97478b132846acaca542bf8d584"} Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.789367 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8srf4" event={"ID":"9b19c06b-4261-4573-a304-a28976a1c610","Type":"ContainerStarted","Data":"0801a82831a79c5c84e9316253c51e2fde0f99135f3e612fd51c058f113fa505"} Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.791285 4620 generic.go:334] "Generic (PLEG): container finished" podID="8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd" containerID="938f9ddac4ea0ca8d8a0f00b9312904510716ccb67835c0c30388d2c136eed98" exitCode=0 Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.791339 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x75k8" event={"ID":"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd","Type":"ContainerDied","Data":"938f9ddac4ea0ca8d8a0f00b9312904510716ccb67835c0c30388d2c136eed98"} Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.800018 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w8bnd" event={"ID":"737e8cb6-704f-4255-8985-ee18874f127d","Type":"ContainerStarted","Data":"ac93bcec16214bdf2241206650d90840af19ec18696cf895c1e943a0ba460fec"} Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.813847 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thq9h" podStartSLOduration=146.813820072 podStartE2EDuration="2m26.813820072s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:54.802818366 +0000 UTC m=+175.415646021" watchObservedRunningTime="2026-01-29 15:03:54.813820072 +0000 UTC m=+175.426647717" Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.815925 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwkt" event={"ID":"02e136c3-8346-4d92-bc4f-fbee60798447","Type":"ContainerStarted","Data":"6a788883517b73f95ac99365640131c5dd699928d50c2d9a94e5a22b7ece6718"} Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.824561 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:54 crc kubenswrapper[4620]: E0129 15:03:54.826025 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:55.326004527 +0000 UTC m=+175.938832172 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.829998 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r" event={"ID":"71877c8f-d05b-42ca-ad80-7ee3277d9558","Type":"ContainerStarted","Data":"4c9c0e29976bd753771fa473c498e15f12d161e7e7f50b9bb06a61e25b992b8a"} Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.832097 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r" Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.838437 4620 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tzb9r container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" start-of-body= Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.838489 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r" podUID="71877c8f-d05b-42ca-ad80-7ee3277d9558" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.838906 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-jvvlj" podStartSLOduration=6.838883132 podStartE2EDuration="6.838883132s" podCreationTimestamp="2026-01-29 15:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:54.832682897 +0000 UTC m=+175.445510552" watchObservedRunningTime="2026-01-29 15:03:54.838883132 +0000 UTC m=+175.451710777" Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.839931 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5qflb" event={"ID":"75a509e8-6d94-4a62-b97d-ea05b87b1b74","Type":"ContainerStarted","Data":"8263fd323ab6dd036cc16b93f52c6a0faa301d46f1552e24d23d1f3686404bcc"} Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.856218 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jfjwn" event={"ID":"82284043-1bad-4d00-ab41-f15f62b48fdb","Type":"ContainerStarted","Data":"57662d44db28eb79acb22ae70e3e8c4dac2f70727379807bfe4c7431fc3146cf"} Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.874887 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lftvg" event={"ID":"f35cc7a6-91ec-4a66-8bb1-6418c351eaf3","Type":"ContainerStarted","Data":"0b77118f0f3e8851c9881fcbf1db0cfb2e38430b703db8a862c361fbefcd785c"} Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.874939 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lftvg" Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.877580 4620 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lftvg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.877631 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lftvg" podUID="f35cc7a6-91ec-4a66-8bb1-6418c351eaf3" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.885802 4620 generic.go:334] "Generic (PLEG): container finished" podID="4e24af32-2b91-4772-b26e-a683ba5c3d16" containerID="dcb1b43146711441a877de4bf8bf3289d7daaef1b0063b25e530600a1f8169b4" exitCode=0 Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.892926 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-27rrp" podStartSLOduration=146.892907405 podStartE2EDuration="2m26.892907405s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:54.887225106 +0000 UTC m=+175.500052761" watchObservedRunningTime="2026-01-29 15:03:54.892907405 +0000 UTC m=+175.505735050" Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.894042 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwkt" podStartSLOduration=146.89402422 podStartE2EDuration="2m26.89402422s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:54.85183341 +0000 UTC m=+175.464661045" watchObservedRunningTime="2026-01-29 15:03:54.89402422 +0000 UTC m=+175.506851885" Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.926750 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:54 crc kubenswrapper[4620]: E0129 15:03:54.928571 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:55.428556918 +0000 UTC m=+176.041384563 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.967315 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc" event={"ID":"4e24af32-2b91-4772-b26e-a683ba5c3d16","Type":"ContainerDied","Data":"dcb1b43146711441a877de4bf8bf3289d7daaef1b0063b25e530600a1f8169b4"} Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.967357 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xbhzj" event={"ID":"c5b2667b-8dba-45fe-8bc5-ca0289d36d96","Type":"ContainerStarted","Data":"c3d45676c6d3ffcff3025e086b2a86fc02a78057a4dd8cd3fd0689de4dc6238a"} Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.971019 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:03:54 crc kubenswrapper[4620]: [-]has-synced failed: reason withheld Jan 29 15:03:54 crc kubenswrapper[4620]: [+]process-running ok Jan 29 15:03:54 crc kubenswrapper[4620]: healthz check failed Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.971081 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.971201 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bgtck" event={"ID":"cb416312-125b-4c11-9ecd-07a06d5e6c02","Type":"ContainerStarted","Data":"10587017c7323ec6990895d5dad7fe1d61929d3644f65bb19f59bacf9cef5a8e"} Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.974126 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8v2d" event={"ID":"e9295f40-e604-4f7f-9f79-f44f1a5f9bc3","Type":"ContainerStarted","Data":"3f89e9e5b911290a95b1771daef968d95255b58990fba35c527f725240dc0e70"} Jan 29 15:03:54 crc kubenswrapper[4620]: I0129 15:03:54.989944 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-8srf4" podStartSLOduration=6.989927292 podStartE2EDuration="6.989927292s" podCreationTimestamp="2026-01-29 15:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:54.965174062 +0000 UTC m=+175.578001707" watchObservedRunningTime="2026-01-29 15:03:54.989927292 +0000 UTC m=+175.602754937" Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.004964 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-swx9b" 
event={"ID":"02381b1b-463c-4532-a690-deee86ffc674","Type":"ContainerStarted","Data":"e8f77b0c51130d15d2a973f8ecb0f743f4d8fce198256d0e7dc331029a5d0ad0"} Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.019504 4620 generic.go:334] "Generic (PLEG): container finished" podID="f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b" containerID="2a1dd4bee19f06f8952a5f09eacf3cb5271e43b22b442efae03e4bfb5517e370" exitCode=0 Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.019590 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" event={"ID":"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b","Type":"ContainerDied","Data":"2a1dd4bee19f06f8952a5f09eacf3cb5271e43b22b442efae03e4bfb5517e370"} Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.031566 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:55 crc kubenswrapper[4620]: E0129 15:03:55.032016 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:55.531999608 +0000 UTC m=+176.144827253 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.035768 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r" podStartSLOduration=147.035740957 podStartE2EDuration="2m27.035740957s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:55.034420664 +0000 UTC m=+175.647248309" watchObservedRunningTime="2026-01-29 15:03:55.035740957 +0000 UTC m=+175.648568602" Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.039538 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w8bnd" podStartSLOduration=147.039528725 podStartE2EDuration="2m27.039528725s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:54.992130102 +0000 UTC m=+175.604957747" watchObservedRunningTime="2026-01-29 15:03:55.039528725 +0000 UTC m=+175.652356370" Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.072018 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9" event={"ID":"713c81b7-8d56-4d58-bd4e-f827de0ca17b","Type":"ContainerStarted","Data":"1479c4d184a9b3b388c649d6224706a5ccddd29bf2155081add9fd2fe06223eb"} Jan 29 
15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.075607 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z8v2d" podStartSLOduration=147.075590342 podStartE2EDuration="2m27.075590342s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:55.071325678 +0000 UTC m=+175.684153323" watchObservedRunningTime="2026-01-29 15:03:55.075590342 +0000 UTC m=+175.688417987" Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.095601 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jfjwn" podStartSLOduration=147.095583762 podStartE2EDuration="2m27.095583762s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:55.094179398 +0000 UTC m=+175.707007043" watchObservedRunningTime="2026-01-29 15:03:55.095583762 +0000 UTC m=+175.708411407" Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.103428 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" event={"ID":"0b2a43de-fb0b-4468-af29-c7436e08fb13","Type":"ContainerStarted","Data":"8e1f1292ba84d73cf216b6de396c0d33917251517163dd7e1fb843f9d8c75ab4"} Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.116969 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" event={"ID":"7436f3bf-66e4-4314-aa0b-8af645dd5bee","Type":"ContainerStarted","Data":"b54ea343b2ffad95c7a0f0fb31efb8d3f4d9e71ef55fe8c545e2c3932946b840"} Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.118210 4620 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwr5k container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.118249 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gwr5k" podUID="8a1b8f5b-4123-4cc0-9d2c-dca6ae8453d5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.118674 4620 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2j567 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.118719 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" podUID="f9d96dcf-b094-485c-8636-401ccc71e918" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.118962 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" Jan 29 
15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.134283 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:55 crc kubenswrapper[4620]: E0129 15:03:55.135472 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:55.635462329 +0000 UTC m=+176.248289974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.137237 4620 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zsh8m container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.137301 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" podUID="7436f3bf-66e4-4314-aa0b-8af645dd5bee" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.175867 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lftvg" podStartSLOduration=147.175846772 podStartE2EDuration="2m27.175846772s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:55.17516731 +0000 UTC m=+175.787994965" watchObservedRunningTime="2026-01-29 15:03:55.175846772 +0000 UTC m=+175.788674407" Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.206588 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5qflb" podStartSLOduration=147.20657134 podStartE2EDuration="2m27.20657134s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:55.205954491 +0000 UTC m=+175.818782136" watchObservedRunningTime="2026-01-29 15:03:55.20657134 +0000 UTC m=+175.819398985" Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.231353 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-sqrtn" podStartSLOduration=147.231332451 podStartE2EDuration="2m27.231332451s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:55.226808687 +0000 UTC m=+175.839636322" watchObservedRunningTime="2026-01-29 15:03:55.231332451 +0000 UTC m=+175.844160096" Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.235835 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:55 crc kubenswrapper[4620]: E0129 15:03:55.239392 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:55.739373114 +0000 UTC m=+176.352200839 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.248708 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9" podStartSLOduration=148.248690078 podStartE2EDuration="2m28.248690078s" podCreationTimestamp="2026-01-29 15:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:55.246396555 +0000 UTC m=+175.859224210" watchObservedRunningTime="2026-01-29 15:03:55.248690078 +0000 UTC m=+175.861517723" Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.285316 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" podStartSLOduration=147.285300752 podStartE2EDuration="2m27.285300752s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:55.284192986 +0000 UTC m=+175.897020641" watchObservedRunningTime="2026-01-29 15:03:55.285300752 +0000 UTC m=+175.898128397" Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.337993 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:55 crc kubenswrapper[4620]: E0129 15:03:55.338367 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:55.838354064 +0000 UTC m=+176.451181709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.439378 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:55 crc kubenswrapper[4620]: E0129 15:03:55.439808 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:55.93978942 +0000 UTC m=+176.552617065 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.541185 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:55 crc kubenswrapper[4620]: E0129 15:03:55.541463 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:56.041452104 +0000 UTC m=+176.654279749 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.642698 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:55 crc kubenswrapper[4620]: E0129 15:03:55.642868 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:56.142842759 +0000 UTC m=+176.755670404 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.643311 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:55 crc kubenswrapper[4620]: E0129 15:03:55.643631 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:56.143619214 +0000 UTC m=+176.756446849 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.743980 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:55 crc kubenswrapper[4620]: E0129 15:03:55.744090 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:56.24406768 +0000 UTC m=+176.856895335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.744220 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:55 crc kubenswrapper[4620]: E0129 15:03:55.744597 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:56.244586155 +0000 UTC m=+176.857413800 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.845462 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:55 crc kubenswrapper[4620]: E0129 15:03:55.845802 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:56.345786945 +0000 UTC m=+176.958614590 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.939567 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:03:55 crc kubenswrapper[4620]: [-]has-synced failed: reason withheld Jan 29 15:03:55 crc kubenswrapper[4620]: [+]process-running ok Jan 29 15:03:55 crc kubenswrapper[4620]: healthz check failed Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.940169 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:03:55 crc kubenswrapper[4620]: I0129 15:03:55.947619 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:55 crc kubenswrapper[4620]: E0129 15:03:55.947964 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:56.447950445 +0000 UTC m=+177.060778090 (durationBeforeRetry 500ms). 
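The router-default startup probe output above is an aggregated health report: each named check prints one "[+]... ok" or "[-]... failed" line, and any failure makes /healthz return HTTP 500, which the kubelet then logs as "HTTP probe failed with statuscode: 500". A simplified handler producing the same shape; the check names come from the log, but the implementation is a stand-in, not the router's:

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

type check struct {
	name string
	ok   bool
}

// healthz renders one line per named check and returns HTTP 500 if any failed.
func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for _, c := range checks {
			if c.ok {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			} else {
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
				failed = true
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError)
			body += "healthz check failed\n"
		}
		io.WriteString(w, body)
	}
}

func main() {
	// Check names copied from the router probe output; states chosen to
	// reproduce the failing snapshot in the log.
	srv := httptest.NewServer(healthz([]check{
		{"backend-http", false},
		{"has-synced", false},
		{"process-running", true},
	}))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d\n%s", resp.StatusCode, out) // status=500, then the [-]/[+] lines
}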
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.048312 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:56 crc kubenswrapper[4620]: E0129 15:03:56.048418 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:56.54838208 +0000 UTC m=+177.161209725 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.048494 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:56 crc kubenswrapper[4620]: E0129 15:03:56.048883 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:56.548872745 +0000 UTC m=+177.161700390 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.136054 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc" event={"ID":"4e24af32-2b91-4772-b26e-a683ba5c3d16","Type":"ContainerStarted","Data":"747e7be64cfde63528df1fbdd19b4de898b4132e38d4f9ed5404d35f1b24a295"} Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.138803 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cl5pv" event={"ID":"d44c8c1f-8400-4291-b4c0-fa7d8a8d584a","Type":"ContainerStarted","Data":"3a9766df0c4c0aebded849fed098b52ce95ede884fdb25b9057e0cb534927381"} Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.140562 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ncw89" event={"ID":"1bf2bada-fd6c-4ec0-9748-c46e8b1ae2dc","Type":"ContainerStarted","Data":"cfcb0acc970bf7c45914b3f4727262f9b44c7cac899307f8f48d908d9876c335"} Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.142344 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" event={"ID":"f7d68aad-b9c8-4c26-8d8c-3ede66dd2b2b","Type":"ContainerStarted","Data":"f2679872be3d7042bc82d0c26a672b6a31e7f8f912e7d16b0783a8aa3ab24483"} Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.147590 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x75k8" event={"ID":"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd","Type":"ContainerStarted","Data":"74759304dc3b1e92e3a02ba7b451fd30ee8eb647871e8212ee4e877fd79b5335"} Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.148931 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-swx9b" event={"ID":"02381b1b-463c-4532-a690-deee86ffc674","Type":"ContainerStarted","Data":"28810f53f1daf68487fb850df695bca1b2e26d34bb052ad7a48a459768e1f1b9"} Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.149506 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:56 crc kubenswrapper[4620]: E0129 15:03:56.149827 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:56.649812207 +0000 UTC m=+177.262639852 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.149979 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:56 crc kubenswrapper[4620]: E0129 15:03:56.150303 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:56.650292272 +0000 UTC m=+177.263119917 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.153706 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5h6r9" event={"ID":"73c298d0-47e0-4026-bb37-871f2357393d","Type":"ContainerStarted","Data":"dd214729453a08e20807a3d96e3c226fff4d785651869658ec63205e77ad0a10"}
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.154030 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5h6r9"
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.155666 4620 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5h6r9 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.155691 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgs4s" event={"ID":"bfd69ce1-51c3-44b0-81e5-576a633bff91","Type":"ContainerStarted","Data":"5058e1706f57d5d91a21a9ab8ebe9faf354a7394f3ee1a038811f1de63563faa"}
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.155714 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5h6r9" podUID="73c298d0-47e0-4026-bb37-871f2357393d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.159014 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cnjz" event={"ID":"cbc1d8f6-fc70-4349-b70a-44751f67425f","Type":"ContainerStarted","Data":"d85133822d03f79d25ac7d4ec593af8894b7bed70268612ab8d36630cee82020"}
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.160972 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8grgt" event={"ID":"e787f3bd-fa49-4961-9556-f5f0f25fca06","Type":"ContainerStarted","Data":"d0fcfd5b00785e8841538c58cf93d6d4b077cc7832932af1bb90b9e637688d39"}
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.162890 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bgtck" event={"ID":"cb416312-125b-4c11-9ecd-07a06d5e6c02","Type":"ContainerStarted","Data":"8f48629281c462dab769acbea155abd90001e0e77b4b4e4ad9c6802acbe6f097"}
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.164263 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" podStartSLOduration=148.164244322 podStartE2EDuration="2m28.164244322s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:56.162449454 +0000 UTC m=+176.775277099" watchObservedRunningTime="2026-01-29 15:03:56.164244322 +0000 UTC m=+176.777071967"
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.169247 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xbhzj" event={"ID":"c5b2667b-8dba-45fe-8bc5-ca0289d36d96","Type":"ContainerStarted","Data":"3e3f8250209c5d83700bc15764e877ae0f58845153ae8b9bb37061b69af3f94b"}
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.171051 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5lxk8" event={"ID":"0df986bd-6456-42fd-b063-301263d6e7ce","Type":"ContainerStarted","Data":"b747c3c0db2275d0d82cb7d794b8b74d505c8164358bec82bf86ae93169283d7"}
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.173407 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gdsbs" event={"ID":"6453eca8-faa2-4fae-821a-246b72626d90","Type":"ContainerStarted","Data":"f43e1a7d95a40aeb1853693111da093fa2d77d17fb962139f2fc81817f541b55"}
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.175386 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9" event={"ID":"713c81b7-8d56-4d58-bd4e-f827de0ca17b","Type":"ContainerStarted","Data":"502515f1acee8bd32ea86f0d38418ac91f76362106c55fc1f4f18e1e3902ff4f"}
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.189394 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8czx" event={"ID":"55fa1078-ff94-4efb-80ea-3bcf65192874","Type":"ContainerStarted","Data":"1b1397e129d6223ee9c8579f4ba991af8e244c5e86520fda01bfc090b33845a0"}
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.190116 4620 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zsh8m container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.190195 4620 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2j567 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.190307 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" podUID="f9d96dcf-b094-485c-8636-401ccc71e918" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused"
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.190252 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" podUID="7436f3bf-66e4-4314-aa0b-8af645dd5bee" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused"
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.190622 4620 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8q7cm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" start-of-body=
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.190648 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" podUID="7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused"
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.191992 4620 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tzb9r container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" start-of-body=
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.192044 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r" podUID="71877c8f-d05b-42ca-ad80-7ee3277d9558" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused"
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.195043 4620 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lftvg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.195099 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lftvg" podUID="f35cc7a6-91ec-4a66-8bb1-6418c351eaf3" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.214731 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgs4s" podStartSLOduration=149.214715722 podStartE2EDuration="2m29.214715722s" podCreationTimestamp="2026-01-29 15:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:56.179387528 +0000 UTC m=+176.792215193" watchObservedRunningTime="2026-01-29 15:03:56.214715722 +0000 UTC m=+176.827543367"
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.239351 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-swx9b" podStartSLOduration=148.239336248 podStartE2EDuration="2m28.239336248s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:56.215956611 +0000 UTC m=+176.828784276" watchObservedRunningTime="2026-01-29 15:03:56.239336248 +0000 UTC m=+176.852163893"
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.240684 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5h6r9" podStartSLOduration=148.2406783 podStartE2EDuration="2m28.2406783s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:56.236575541 +0000 UTC m=+176.849403186" watchObservedRunningTime="2026-01-29 15:03:56.2406783 +0000 UTC m=+176.853505945"
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.250664 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:56 crc kubenswrapper[4620]: E0129 15:03:56.251115 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:56.751087078 +0000 UTC m=+177.363914723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.258872 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:56 crc kubenswrapper[4620]: E0129 15:03:56.272803 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:56.772788202 +0000 UTC m=+177.385615847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.285867 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xbhzj" podStartSLOduration=148.285845723 podStartE2EDuration="2m28.285845723s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:56.262346953 +0000 UTC m=+176.875174598" watchObservedRunningTime="2026-01-29 15:03:56.285845723 +0000 UTC m=+176.898673368"
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.292483 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-gdsbs" podStartSLOduration=148.292460362 podStartE2EDuration="2m28.292460362s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:56.285065428 +0000 UTC m=+176.897893073" watchObservedRunningTime="2026-01-29 15:03:56.292460362 +0000 UTC m=+176.905288017"
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.366815 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:56 crc kubenswrapper[4620]: E0129 15:03:56.367152 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:56.867093844 +0000 UTC m=+177.479921499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.367364 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:56 crc kubenswrapper[4620]: E0129 15:03:56.367914 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:56.867892569 +0000 UTC m=+177.480720214 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.470326 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:56 crc kubenswrapper[4620]: E0129 15:03:56.470510 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:56.970485082 +0000 UTC m=+177.583312727 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.470750 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:56 crc kubenswrapper[4620]: E0129 15:03:56.471019 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:56.971012659 +0000 UTC m=+177.583840294 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.571741 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:56 crc kubenswrapper[4620]: E0129 15:03:56.571929 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:57.071895328 +0000 UTC m=+177.684722993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.572261 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:56 crc kubenswrapper[4620]: E0129 15:03:56.572550 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:57.072536939 +0000 UTC m=+177.685364584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.673066 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:56 crc kubenswrapper[4620]: E0129 15:03:56.673252 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:57.173228442 +0000 UTC m=+177.786056087 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.673535 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:56 crc kubenswrapper[4620]: E0129 15:03:56.673945 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:57.173927993 +0000 UTC m=+177.786755638 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.774595 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:56 crc kubenswrapper[4620]: E0129 15:03:56.774823 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:57.274799162 +0000 UTC m=+177.887626807 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.775090 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:56 crc kubenswrapper[4620]: E0129 15:03:56.775420 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:57.275411562 +0000 UTC m=+177.888239207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.876696 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:56 crc kubenswrapper[4620]: E0129 15:03:56.877053 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:57.377021664 +0000 UTC m=+177.989849359 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.877269 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:56 crc kubenswrapper[4620]: E0129 15:03:56.877572 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:57.377558212 +0000 UTC m=+177.990385857 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.937945 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:03:56 crc kubenswrapper[4620]: [-]has-synced failed: reason withheld Jan 29 15:03:56 crc kubenswrapper[4620]: [+]process-running ok Jan 29 15:03:56 crc kubenswrapper[4620]: healthz check failed Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.938018 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.978748 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:56 crc kubenswrapper[4620]: E0129 15:03:56.978899 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:57.478872885 +0000 UTC m=+178.091700530 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:56 crc kubenswrapper[4620]: I0129 15:03:56.978989 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:56 crc kubenswrapper[4620]: E0129 15:03:56.979321 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:57.479313058 +0000 UTC m=+178.092140703 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.079605 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:57 crc kubenswrapper[4620]: E0129 15:03:57.079812 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:57.579786675 +0000 UTC m=+178.192614320 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.079909 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:57 crc kubenswrapper[4620]: E0129 15:03:57.080227 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:57.580212238 +0000 UTC m=+178.193039883 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.181548 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:57 crc kubenswrapper[4620]: E0129 15:03:57.181801 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:57.681769318 +0000 UTC m=+178.294596963 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.181990 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:57 crc kubenswrapper[4620]: E0129 15:03:57.182373 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:57.682362937 +0000 UTC m=+178.295190582 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.195827 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x75k8" event={"ID":"8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd","Type":"ContainerStarted","Data":"9714690d748f9aa12eee05d4e5bbc1ba850cb6e11bd49ceba9c2738a1e57a7d0"}
Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.198645 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-ncw89" event={"ID":"1bf2bada-fd6c-4ec0-9748-c46e8b1ae2dc","Type":"ContainerStarted","Data":"86451ef726299fabf64856b9d31471970b4a6121d9dc0307626fe092b09efeb6"}
Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.201235 4620 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tzb9r container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" start-of-body=
Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.201239 4620 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5h6r9 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.201290 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r" podUID="71877c8f-d05b-42ca-ad80-7ee3277d9558" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused"
Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.201330 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5h6r9" podUID="73c298d0-47e0-4026-bb37-871f2357393d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.283642 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:57 crc kubenswrapper[4620]: E0129 15:03:57.283869 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:57.783844536 +0000 UTC m=+178.396672181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.284305 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:57 crc kubenswrapper[4620]: E0129 15:03:57.289276 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:57.789258246 +0000 UTC m=+178.402085941 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.293219 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5lxk8" podStartSLOduration=149.29320114 podStartE2EDuration="2m29.29320114s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:56.310039815 +0000 UTC m=+176.922867460" watchObservedRunningTime="2026-01-29 15:03:57.29320114 +0000 UTC m=+177.906028805" Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.293877 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.295468 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-x75k8" podStartSLOduration=149.295461801 podStartE2EDuration="2m29.295461801s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:57.294042007 +0000 UTC m=+177.906869652" watchObservedRunningTime="2026-01-29 15:03:57.295461801 +0000 UTC m=+177.908289446" Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.360254 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c8czx" podStartSLOduration=149.360237803 podStartE2EDuration="2m29.360237803s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:57.356055721 +0000 UTC m=+177.968883386" watchObservedRunningTime="2026-01-29 15:03:57.360237803 +0000 UTC m=+177.973065448" Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.386236 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:03:57 crc kubenswrapper[4620]: E0129 15:03:57.386573 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:57.886557903 +0000 UTC m=+178.499385548 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.409969 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-bgtck" podStartSLOduration=9.40994942 podStartE2EDuration="9.40994942s" podCreationTimestamp="2026-01-29 15:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:57.408514904 +0000 UTC m=+178.021342559" watchObservedRunningTime="2026-01-29 15:03:57.40994942 +0000 UTC m=+178.022777055" Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.465448 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc" podStartSLOduration=150.465430988 podStartE2EDuration="2m30.465430988s" podCreationTimestamp="2026-01-29 15:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:57.463801996 +0000 UTC m=+178.076629641" watchObservedRunningTime="2026-01-29 15:03:57.465430988 +0000 UTC m=+178.078258633" Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.487658 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:03:57 crc kubenswrapper[4620]: E0129 15:03:57.487986 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:57.987973359 +0000 UTC m=+178.600801004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.540564 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-ncw89" podStartSLOduration=149.540544725 podStartE2EDuration="2m29.540544725s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:57.538634846 +0000 UTC m=+178.151462511" watchObservedRunningTime="2026-01-29 15:03:57.540544725 +0000 UTC m=+178.153372380"
Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.588504 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:57 crc kubenswrapper[4620]: E0129 15:03:57.588643 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:58.088619151 +0000 UTC m=+178.701446796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.588746 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:57 crc kubenswrapper[4620]: E0129 15:03:57.589200 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:58.089183429 +0000 UTC m=+178.702011084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.603944 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-8grgt" podStartSLOduration=149.603923692 podStartE2EDuration="2m29.603923692s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:57.581187556 +0000 UTC m=+178.194015221" watchObservedRunningTime="2026-01-29 15:03:57.603923692 +0000 UTC m=+178.216751347"
Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.646630 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4cnjz" podStartSLOduration=149.646606058 podStartE2EDuration="2m29.646606058s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:57.606524654 +0000 UTC m=+178.219352319" watchObservedRunningTime="2026-01-29 15:03:57.646606058 +0000 UTC m=+178.259433713"
Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.689699 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:57 crc kubenswrapper[4620]: E0129 15:03:57.689959 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:58.189939074 +0000 UTC m=+178.802766719 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.690340 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:57 crc kubenswrapper[4620]: E0129 15:03:57.690698 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:58.190686707 +0000 UTC m=+178.803514352 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.791252 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:57 crc kubenswrapper[4620]: E0129 15:03:57.791469 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:58.291454233 +0000 UTC m=+178.904281878 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.892575 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:57 crc kubenswrapper[4620]: E0129 15:03:57.892954 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:58.392939721 +0000 UTC m=+179.005767366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.944512 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:03:57 crc kubenswrapper[4620]: [-]has-synced failed: reason withheld
Jan 29 15:03:57 crc kubenswrapper[4620]: [+]process-running ok
Jan 29 15:03:57 crc kubenswrapper[4620]: healthz check failed
Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.944956 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.993908 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:57 crc kubenswrapper[4620]: E0129 15:03:57.994064 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:58.494029917 +0000 UTC m=+179.106857562 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:57 crc kubenswrapper[4620]: I0129 15:03:57.994534 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:57 crc kubenswrapper[4620]: E0129 15:03:57.994979 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:58.494965677 +0000 UTC m=+179.107793322 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:58 crc kubenswrapper[4620]: I0129 15:03:58.095976 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:58 crc kubenswrapper[4620]: E0129 15:03:58.096192 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:58.596174466 +0000 UTC m=+179.209002121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:58 crc kubenswrapper[4620]: I0129 15:03:58.096569 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:58 crc kubenswrapper[4620]: E0129 15:03:58.096944 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:58.596920319 +0000 UTC m=+179.209747954 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:58 crc kubenswrapper[4620]: I0129 15:03:58.198130 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:58 crc kubenswrapper[4620]: E0129 15:03:58.198387 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:58.698364237 +0000 UTC m=+179.311191882 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:58 crc kubenswrapper[4620]: I0129 15:03:58.198657 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:58 crc kubenswrapper[4620]: E0129 15:03:58.199085 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:58.699069399 +0000 UTC m=+179.311897044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:58 crc kubenswrapper[4620]: I0129 15:03:58.205158 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" event={"ID":"11cea8f7-7594-45c6-8f0c-77caad6bdbc5","Type":"ContainerStarted","Data":"822e8211b60b6402c498453cfabbb4d5d75e027ec44cb22b190d11bfdf15c7f9"}
Jan 29 15:03:58 crc kubenswrapper[4620]: I0129 15:03:58.300315 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:58 crc kubenswrapper[4620]: E0129 15:03:58.300519 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:58.800487985 +0000 UTC m=+179.413315630 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:58 crc kubenswrapper[4620]: I0129 15:03:58.300615 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:58 crc kubenswrapper[4620]: E0129 15:03:58.301578 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:58.801561059 +0000 UTC m=+179.414388704 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:58 crc kubenswrapper[4620]: I0129 15:03:58.401773 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:58 crc kubenswrapper[4620]: E0129 15:03:58.402013 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:58.901987824 +0000 UTC m=+179.514815469 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:58 crc kubenswrapper[4620]: I0129 15:03:58.402147 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:58 crc kubenswrapper[4620]: E0129 15:03:58.402624 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:58.902603303 +0000 UTC m=+179.515430948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:58 crc kubenswrapper[4620]: I0129 15:03:58.503254 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:58 crc kubenswrapper[4620]: E0129 15:03:58.503464 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:59.003431481 +0000 UTC m=+179.616259126 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:58 crc kubenswrapper[4620]: I0129 15:03:58.503670 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:58 crc kubenswrapper[4620]: E0129 15:03:58.503993 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:59.003979928 +0000 UTC m=+179.616807573 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:58 crc kubenswrapper[4620]: I0129 15:03:58.604673 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:58 crc kubenswrapper[4620]: E0129 15:03:58.604841 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:59.104819116 +0000 UTC m=+179.717646761 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:58 crc kubenswrapper[4620]: I0129 15:03:58.604918 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:58 crc kubenswrapper[4620]: E0129 15:03:58.605229 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:59.105222129 +0000 UTC m=+179.718049774 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:58 crc kubenswrapper[4620]: I0129 15:03:58.706089 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:58 crc kubenswrapper[4620]: E0129 15:03:58.706368 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:59.206351316 +0000 UTC m=+179.819178961 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:58 crc kubenswrapper[4620]: I0129 15:03:58.706542 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:58 crc kubenswrapper[4620]: E0129 15:03:58.706803 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:59.20679523 +0000 UTC m=+179.819622875 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:58 crc kubenswrapper[4620]: I0129 15:03:58.713097 4620 csr.go:261] certificate signing request csr-fqzpm is approved, waiting to be issued
Jan 29 15:03:58 crc kubenswrapper[4620]: I0129 15:03:58.737060 4620 csr.go:257] certificate signing request csr-fqzpm is issued
Jan 29 15:03:58 crc kubenswrapper[4620]: I0129 15:03:58.807332 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:58 crc kubenswrapper[4620]: E0129 15:03:58.807518 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:59.307491943 +0000 UTC m=+179.920319588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:58 crc kubenswrapper[4620]: I0129 15:03:58.909243 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:58 crc kubenswrapper[4620]: E0129 15:03:58.909665 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:59.409647193 +0000 UTC m=+180.022474838 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:58 crc kubenswrapper[4620]: I0129 15:03:58.942877 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:03:58 crc kubenswrapper[4620]: [-]has-synced failed: reason withheld
Jan 29 15:03:58 crc kubenswrapper[4620]: [+]process-running ok
Jan 29 15:03:58 crc kubenswrapper[4620]: healthz check failed
Jan 29 15:03:58 crc kubenswrapper[4620]: I0129 15:03:58.942982 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.010237 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:59 crc kubenswrapper[4620]: E0129 15:03:59.010624 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:59.510598844 +0000 UTC m=+180.123426489 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.111198 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:59 crc kubenswrapper[4620]: E0129 15:03:59.111591 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:59.611576267 +0000 UTC m=+180.224403912 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.123128 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb"
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.190424 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cl5pv" podStartSLOduration=151.190405761 podStartE2EDuration="2m31.190405761s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:03:57.648802947 +0000 UTC m=+178.261630612" watchObservedRunningTime="2026-01-29 15:03:59.190405761 +0000 UTC m=+179.803233406"
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.193702 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.205152 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.208309 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.210088 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.212408 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:59 crc kubenswrapper[4620]: E0129 15:03:59.213365 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:59.713339933 +0000 UTC m=+180.326167578 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.277466 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.314180 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.314240 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4147aba-e9a4-47c8-8ceb-1a743b34c328-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f4147aba-e9a4-47c8-8ceb-1a743b34c328\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.314302 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4147aba-e9a4-47c8-8ceb-1a743b34c328-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f4147aba-e9a4-47c8-8ceb-1a743b34c328\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 15:03:59 crc kubenswrapper[4620]: E0129 15:03:59.314583 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:59.814571924 +0000 UTC m=+180.427399569 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.415160 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:59 crc kubenswrapper[4620]: E0129 15:03:59.415460 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:03:59.915434002 +0000 UTC m=+180.528261647 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.415875 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4147aba-e9a4-47c8-8ceb-1a743b34c328-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f4147aba-e9a4-47c8-8ceb-1a743b34c328\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.416065 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.416211 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4147aba-e9a4-47c8-8ceb-1a743b34c328-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f4147aba-e9a4-47c8-8ceb-1a743b34c328\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.416595 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4147aba-e9a4-47c8-8ceb-1a743b34c328-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f4147aba-e9a4-47c8-8ceb-1a743b34c328\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 15:03:59 crc kubenswrapper[4620]: E0129 15:03:59.416879 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:03:59.916870968 +0000 UTC m=+180.529698613 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.508024 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4147aba-e9a4-47c8-8ceb-1a743b34c328-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f4147aba-e9a4-47c8-8ceb-1a743b34c328\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.516917 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:59 crc kubenswrapper[4620]: E0129 15:03:59.517240 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:00.01721485 +0000 UTC m=+180.630042495 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.527072 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.618990 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:59 crc kubenswrapper[4620]: E0129 15:03:59.619482 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:00.119470363 +0000 UTC m=+180.732298008 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.648692 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc"
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.649947 4620 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qlbvc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.650067 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc" podUID="4e24af32-2b91-4772-b26e-a683ba5c3d16" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.649986 4620 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qlbvc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.650567 4620 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qlbvc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.650675 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc" podUID="4e24af32-2b91-4772-b26e-a683ba5c3d16" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.650769 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc" podUID="4e24af32-2b91-4772-b26e-a683ba5c3d16" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.721144 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:59 crc kubenswrapper[4620]: E0129 15:03:59.721407 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:00.221380165 +0000 UTC m=+180.834207810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.721794 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:59 crc kubenswrapper[4620]: E0129 15:03:59.722130 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:00.222117558 +0000 UTC m=+180.834945203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.738583 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-29 14:58:58 +0000 UTC, rotation deadline is 2026-12-17 16:04:44.564305513 +0000 UTC
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.738893 4620 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7729h0m44.825416576s for next certificate rotation
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.822942 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:59 crc kubenswrapper[4620]: E0129 15:03:59.823236 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:00.323210844 +0000 UTC m=+180.936038489 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.823427 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:59 crc kubenswrapper[4620]: E0129 15:03:59.823831 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:00.323818253 +0000 UTC m=+180.936645898 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.924124 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:03:59 crc kubenswrapper[4620]: E0129 15:03:59.924330 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:00.424292129 +0000 UTC m=+181.037119774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.924636 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:03:59 crc kubenswrapper[4620]: E0129 15:03:59.924993 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:00.424984791 +0000 UTC m=+181.037812436 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.938801 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:03:59 crc kubenswrapper[4620]: [-]has-synced failed: reason withheld
Jan 29 15:03:59 crc kubenswrapper[4620]: [+]process-running ok
Jan 29 15:03:59 crc kubenswrapper[4620]: healthz check failed
Jan 29 15:03:59 crc kubenswrapper[4620]: I0129 15:03:59.939146 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.026055 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:04:00 crc kubenswrapper[4620]: E0129 15:04:00.026668 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:00.526645415 +0000 UTC m=+181.139473060 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.128358 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:04:00 crc kubenswrapper[4620]: E0129 15:04:00.128682 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:00.62867011 +0000 UTC m=+181.241497755 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.164067 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-bgtck"
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.229452 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:04:00 crc kubenswrapper[4620]: E0129 15:04:00.230417 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:00.730401816 +0000 UTC m=+181.343229461 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.332109 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:04:00 crc kubenswrapper[4620]: E0129 15:04:00.332506 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:00.832492384 +0000 UTC m=+181.445320029 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.400138 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.433325 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:04:00 crc kubenswrapper[4620]: E0129 15:04:00.433471 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:00.933445106 +0000 UTC m=+181.546272751 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.433580 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:04:00 crc kubenswrapper[4620]: E0129 15:04:00.433873 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:00.933861788 +0000 UTC m=+181.546689433 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.524102 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-z57gf"
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.524143 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-z57gf"
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.524777 4620 patch_prober.go:28] interesting pod/console-f9d7485db-z57gf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.524817 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-z57gf" podUID="aa662f18-6ab4-43b8-8e65-8de41043b74d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused"
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.534232 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:04:00 crc kubenswrapper[4620]: E0129 15:04:00.534618 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed.
No retries permitted until 2026-01-29 15:04:01.034604313 +0000 UTC m=+181.647431958 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.549836 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.551154 4620 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwr5k container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.551188 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gwr5k" podUID="8a1b8f5b-4123-4cc0-9d2c-dca6ae8453d5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.551239 4620 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwr5k container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.551286 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gwr5k" podUID="8a1b8f5b-4123-4cc0-9d2c-dca6ae8453d5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.572000 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.572368 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.636818 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:00 crc kubenswrapper[4620]: E0129 15:04:00.637941 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:01.13792243 +0000 UTC m=+181.750750175 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.657877 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67" Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.674009 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.674047 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.674961 4620 patch_prober.go:28] interesting pod/apiserver-76f77b778f-x75k8 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.675020 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-x75k8" podUID="8cec96d0-b29c-4fc4-a8b6-a8e5e8db65cd" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.707985 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lg725"] Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.709191 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lg725" Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.714551 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.733160 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lg725"] Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.737933 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:04:00 crc kubenswrapper[4620]: E0129 15:04:00.738103 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:01.238078496 +0000 UTC m=+181.850906141 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.738216 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:00 crc kubenswrapper[4620]: E0129 15:04:00.739383 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:01.239366886 +0000 UTC m=+181.852194591 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.840656 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:04:00 crc kubenswrapper[4620]: E0129 15:04:00.840909 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:01.340878316 +0000 UTC m=+181.953705961 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.841475 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aaa7ad3-f4ef-4a15-993d-43166028f71b-catalog-content\") pod \"certified-operators-lg725\" (UID: \"2aaa7ad3-f4ef-4a15-993d-43166028f71b\") " pod="openshift-marketplace/certified-operators-lg725" Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.841578 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aaa7ad3-f4ef-4a15-993d-43166028f71b-utilities\") pod \"certified-operators-lg725\" (UID: \"2aaa7ad3-f4ef-4a15-993d-43166028f71b\") " pod="openshift-marketplace/certified-operators-lg725" Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.841684 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzm4c\" (UniqueName: \"kubernetes.io/projected/2aaa7ad3-f4ef-4a15-993d-43166028f71b-kube-api-access-mzm4c\") pod \"certified-operators-lg725\" (UID: \"2aaa7ad3-f4ef-4a15-993d-43166028f71b\") " pod="openshift-marketplace/certified-operators-lg725" Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.841808 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:00 crc kubenswrapper[4620]: E0129 15:04:00.842120 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:01.342112005 +0000 UTC m=+181.954939640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.905718 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jn68n"] Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.906625 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jn68n"
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.917382 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.936874 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5ppqg"
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.939394 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jn68n"]
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.940467 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:04:00 crc kubenswrapper[4620]: [-]has-synced failed: reason withheld
Jan 29 15:04:00 crc kubenswrapper[4620]: [+]process-running ok
Jan 29 15:04:00 crc kubenswrapper[4620]: healthz check failed
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.940513 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.943058 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.943201 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67739a3a-d009-4685-a79a-aaa81f5b2daf-utilities\") pod \"community-operators-jn68n\" (UID: \"67739a3a-d009-4685-a79a-aaa81f5b2daf\") " pod="openshift-marketplace/community-operators-jn68n"
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.943241 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67739a3a-d009-4685-a79a-aaa81f5b2daf-catalog-content\") pod \"community-operators-jn68n\" (UID: \"67739a3a-d009-4685-a79a-aaa81f5b2daf\") " pod="openshift-marketplace/community-operators-jn68n"
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.943270 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k7hs\" (UniqueName: \"kubernetes.io/projected/67739a3a-d009-4685-a79a-aaa81f5b2daf-kube-api-access-5k7hs\") pod \"community-operators-jn68n\" (UID: \"67739a3a-d009-4685-a79a-aaa81f5b2daf\") " pod="openshift-marketplace/community-operators-jn68n"
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.943309 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aaa7ad3-f4ef-4a15-993d-43166028f71b-catalog-content\") pod \"certified-operators-lg725\" (UID: \"2aaa7ad3-f4ef-4a15-993d-43166028f71b\") " pod="openshift-marketplace/certified-operators-lg725"
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.943340 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aaa7ad3-f4ef-4a15-993d-43166028f71b-utilities\") pod \"certified-operators-lg725\" (UID: \"2aaa7ad3-f4ef-4a15-993d-43166028f71b\") " pod="openshift-marketplace/certified-operators-lg725"
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.943368 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzm4c\" (UniqueName: \"kubernetes.io/projected/2aaa7ad3-f4ef-4a15-993d-43166028f71b-kube-api-access-mzm4c\") pod \"certified-operators-lg725\" (UID: \"2aaa7ad3-f4ef-4a15-993d-43166028f71b\") " pod="openshift-marketplace/certified-operators-lg725"
Jan 29 15:04:00 crc kubenswrapper[4620]: E0129 15:04:00.943698 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:01.443682955 +0000 UTC m=+182.056510600 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.944155 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aaa7ad3-f4ef-4a15-993d-43166028f71b-catalog-content\") pod \"certified-operators-lg725\" (UID: \"2aaa7ad3-f4ef-4a15-993d-43166028f71b\") " pod="openshift-marketplace/certified-operators-lg725"
Jan 29 15:04:00 crc kubenswrapper[4620]: I0129 15:04:00.944390 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aaa7ad3-f4ef-4a15-993d-43166028f71b-utilities\") pod \"certified-operators-lg725\" (UID: \"2aaa7ad3-f4ef-4a15-993d-43166028f71b\") " pod="openshift-marketplace/certified-operators-lg725"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.010086 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzm4c\" (UniqueName: \"kubernetes.io/projected/2aaa7ad3-f4ef-4a15-993d-43166028f71b-kube-api-access-mzm4c\") pod \"certified-operators-lg725\" (UID: \"2aaa7ad3-f4ef-4a15-993d-43166028f71b\") " pod="openshift-marketplace/certified-operators-lg725"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.039320 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.039555 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lg725" Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.045677 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67739a3a-d009-4685-a79a-aaa81f5b2daf-catalog-content\") pod \"community-operators-jn68n\" (UID: \"67739a3a-d009-4685-a79a-aaa81f5b2daf\") " pod="openshift-marketplace/community-operators-jn68n" Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.045768 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k7hs\" (UniqueName: \"kubernetes.io/projected/67739a3a-d009-4685-a79a-aaa81f5b2daf-kube-api-access-5k7hs\") pod \"community-operators-jn68n\" (UID: \"67739a3a-d009-4685-a79a-aaa81f5b2daf\") " pod="openshift-marketplace/community-operators-jn68n" Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.046371 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67739a3a-d009-4685-a79a-aaa81f5b2daf-catalog-content\") pod \"community-operators-jn68n\" (UID: \"67739a3a-d009-4685-a79a-aaa81f5b2daf\") " pod="openshift-marketplace/community-operators-jn68n" Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.046642 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.046709 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67739a3a-d009-4685-a79a-aaa81f5b2daf-utilities\") pod \"community-operators-jn68n\" (UID: \"67739a3a-d009-4685-a79a-aaa81f5b2daf\") " pod="openshift-marketplace/community-operators-jn68n" Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.048312 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67739a3a-d009-4685-a79a-aaa81f5b2daf-utilities\") pod \"community-operators-jn68n\" (UID: \"67739a3a-d009-4685-a79a-aaa81f5b2daf\") " pod="openshift-marketplace/community-operators-jn68n" Jan 29 15:04:01 crc kubenswrapper[4620]: E0129 15:04:01.049227 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:01.549215322 +0000 UTC m=+182.162042967 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.079696 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cl5pv" Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.087504 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k7hs\" (UniqueName: \"kubernetes.io/projected/67739a3a-d009-4685-a79a-aaa81f5b2daf-kube-api-access-5k7hs\") pod \"community-operators-jn68n\" (UID: \"67739a3a-d009-4685-a79a-aaa81f5b2daf\") " pod="openshift-marketplace/community-operators-jn68n" Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.108445 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mzcfs"] Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.119042 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tzb9r" Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.140631 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzcfs" Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.141356 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5h6r9" Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.149466 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:04:01 crc kubenswrapper[4620]: E0129 15:04:01.149716 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:01.649701579 +0000 UTC m=+182.262529224 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.149958 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d401dd8c-5cfc-4cbd-92de-bb9896e90ea0-utilities\") pod \"certified-operators-mzcfs\" (UID: \"d401dd8c-5cfc-4cbd-92de-bb9896e90ea0\") " pod="openshift-marketplace/certified-operators-mzcfs"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.150073 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.150120 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d401dd8c-5cfc-4cbd-92de-bb9896e90ea0-catalog-content\") pod \"certified-operators-mzcfs\" (UID: \"d401dd8c-5cfc-4cbd-92de-bb9896e90ea0\") " pod="openshift-marketplace/certified-operators-mzcfs"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.150658 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw6hg\" (UniqueName: \"kubernetes.io/projected/d401dd8c-5cfc-4cbd-92de-bb9896e90ea0-kube-api-access-nw6hg\") pod \"certified-operators-mzcfs\" (UID: \"d401dd8c-5cfc-4cbd-92de-bb9896e90ea0\") " pod="openshift-marketplace/certified-operators-mzcfs"
Jan 29 15:04:01 crc kubenswrapper[4620]: E0129 15:04:01.150912 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:01.650899286 +0000 UTC m=+182.263726941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.169310 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.170013 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.180094 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.180517 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.180695 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.199129 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mzcfs"]
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.210590 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.257584 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jn68n"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.268275 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lftvg"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.268428 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.268696 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw6hg\" (UniqueName: \"kubernetes.io/projected/d401dd8c-5cfc-4cbd-92de-bb9896e90ea0-kube-api-access-nw6hg\") pod \"certified-operators-mzcfs\" (UID: \"d401dd8c-5cfc-4cbd-92de-bb9896e90ea0\") " pod="openshift-marketplace/certified-operators-mzcfs"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.268776 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d401dd8c-5cfc-4cbd-92de-bb9896e90ea0-utilities\") pod \"certified-operators-mzcfs\" (UID: \"d401dd8c-5cfc-4cbd-92de-bb9896e90ea0\") " pod="openshift-marketplace/certified-operators-mzcfs"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.268849 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d401dd8c-5cfc-4cbd-92de-bb9896e90ea0-catalog-content\") pod \"certified-operators-mzcfs\" (UID: \"d401dd8c-5cfc-4cbd-92de-bb9896e90ea0\") " pod="openshift-marketplace/certified-operators-mzcfs"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.268881 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03c75101-bbca-4251-9357-a834826c89bd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"03c75101-bbca-4251-9357-a834826c89bd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.268927 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03c75101-bbca-4251-9357-a834826c89bd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"03c75101-bbca-4251-9357-a834826c89bd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.269730 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d401dd8c-5cfc-4cbd-92de-bb9896e90ea0-utilities\") pod \"certified-operators-mzcfs\" (UID: \"d401dd8c-5cfc-4cbd-92de-bb9896e90ea0\") " pod="openshift-marketplace/certified-operators-mzcfs"
Jan 29 15:04:01 crc kubenswrapper[4620]: E0129 15:04:01.269823 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:01.769808094 +0000 UTC m=+182.382635739 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.270655 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d401dd8c-5cfc-4cbd-92de-bb9896e90ea0-catalog-content\") pod \"certified-operators-mzcfs\" (UID: \"d401dd8c-5cfc-4cbd-92de-bb9896e90ea0\") " pod="openshift-marketplace/certified-operators-mzcfs"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.310019 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f4147aba-e9a4-47c8-8ceb-1a743b34c328","Type":"ContainerStarted","Data":"185f006d72248f90f06e5da8b8f04fe7eb9b134d4c8667dfc6739b333b14e15c"}
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.325089 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-stj67"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.343830 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l8bmh"]
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.344932 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l8bmh"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.365945 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw6hg\" (UniqueName: \"kubernetes.io/projected/d401dd8c-5cfc-4cbd-92de-bb9896e90ea0-kube-api-access-nw6hg\") pod \"certified-operators-mzcfs\" (UID: \"d401dd8c-5cfc-4cbd-92de-bb9896e90ea0\") " pod="openshift-marketplace/certified-operators-mzcfs"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.371634 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd"
Jan 29 15:04:01 crc kubenswrapper[4620]: E0129 15:04:01.372667 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:01.872656385 +0000 UTC m=+182.485484030 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.372701 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03c75101-bbca-4251-9357-a834826c89bd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"03c75101-bbca-4251-9357-a834826c89bd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.372770 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03c75101-bbca-4251-9357-a834826c89bd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"03c75101-bbca-4251-9357-a834826c89bd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.374272 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03c75101-bbca-4251-9357-a834826c89bd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"03c75101-bbca-4251-9357-a834826c89bd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.446287 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l8bmh"]
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.476701 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.476929 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2-catalog-content\") pod \"community-operators-l8bmh\" (UID: \"0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2\") " pod="openshift-marketplace/community-operators-l8bmh"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.476988 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6ws5\" (UniqueName: \"kubernetes.io/projected/0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2-kube-api-access-w6ws5\") pod \"community-operators-l8bmh\" (UID: \"0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2\") " pod="openshift-marketplace/community-operators-l8bmh"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.477044 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2-utilities\") pod \"community-operators-l8bmh\" (UID: \"0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2\") " pod="openshift-marketplace/community-operators-l8bmh"
Jan 29 15:04:01 crc kubenswrapper[4620]: E0129 15:04:01.477162 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:01.977147708 +0000 UTC m=+182.589975353 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.484082 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzcfs"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.490399 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03c75101-bbca-4251-9357-a834826c89bd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"03c75101-bbca-4251-9357-a834826c89bd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.500414 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.582368 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2-catalog-content\") pod \"community-operators-l8bmh\" (UID: \"0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2\") " pod="openshift-marketplace/community-operators-l8bmh" Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.582418 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.582438 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6ws5\" (UniqueName: \"kubernetes.io/projected/0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2-kube-api-access-w6ws5\") pod \"community-operators-l8bmh\" (UID: \"0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2\") " pod="openshift-marketplace/community-operators-l8bmh" Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.582478 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2-utilities\") pod \"community-operators-l8bmh\" (UID: \"0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2\") " pod="openshift-marketplace/community-operators-l8bmh" Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.582878 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2-utilities\") pod \"community-operators-l8bmh\" (UID: \"0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2\") " pod="openshift-marketplace/community-operators-l8bmh" Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.583087 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2-catalog-content\") pod \"community-operators-l8bmh\" (UID: \"0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2\") " pod="openshift-marketplace/community-operators-l8bmh" Jan 29 15:04:01 crc kubenswrapper[4620]: E0129 15:04:01.583297 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:02.083287533 +0000 UTC m=+182.696115178 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.631651 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6ws5\" (UniqueName: \"kubernetes.io/projected/0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2-kube-api-access-w6ws5\") pod \"community-operators-l8bmh\" (UID: \"0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2\") " pod="openshift-marketplace/community-operators-l8bmh" Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.666092 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l8bmh" Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.684717 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:04:01 crc kubenswrapper[4620]: E0129 15:04:01.684905 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:02.184883575 +0000 UTC m=+182.797711220 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.685027 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:01 crc kubenswrapper[4620]: E0129 15:04:01.685301 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:02.185294187 +0000 UTC m=+182.798121832 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.787334 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:04:01 crc kubenswrapper[4620]: E0129 15:04:01.787519 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:02.287495229 +0000 UTC m=+182.900322874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.787896 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:01 crc kubenswrapper[4620]: E0129 15:04:01.788217 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:02.288206401 +0000 UTC m=+182.901034046 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.888474 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:04:01 crc kubenswrapper[4620]: E0129 15:04:01.888904 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:02.388889034 +0000 UTC m=+183.001716679 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.940883 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:04:01 crc kubenswrapper[4620]: [-]has-synced failed: reason withheld Jan 29 15:04:01 crc kubenswrapper[4620]: [+]process-running ok Jan 29 15:04:01 crc kubenswrapper[4620]: healthz check failed Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.940943 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:04:01 crc kubenswrapper[4620]: I0129 15:04:01.992062 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:01 crc kubenswrapper[4620]: E0129 15:04:01.992361 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:02.492349735 +0000 UTC m=+183.105177380 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.069748 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lg725"] Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.093326 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:04:02 crc kubenswrapper[4620]: E0129 15:04:02.093826 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:02.593807742 +0000 UTC m=+183.206635387 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.195043 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:02 crc kubenswrapper[4620]: E0129 15:04:02.195788 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:02.695773195 +0000 UTC m=+183.308600850 (durationBeforeRetry 500ms). 
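[editor's note — not part of the captured log] The repeating "Operation for {volumeName:...} failed. No retries permitted until ... (durationBeforeRetry 500ms)" entries above come from kubelet's volume reconciler gating a pending operation per volume key. A minimal Go sketch of that gate pattern, assuming a simple doubling policy and invented names (gate, fail, allowed); kubelet's actual nestedpendingoperations code differs in keying and caps:

// retrygate.go - illustrative sketch, not kubelet source.
package main

import (
	"fmt"
	"sync"
	"time"
)

type gate struct {
	mu      sync.Mutex
	nextTry map[string]time.Time      // earliest time each operation may run again
	backoff map[string]time.Duration  // current delay per operation key
}

func newGate() *gate {
	return &gate{nextTry: map[string]time.Time{}, backoff: map[string]time.Duration{}}
}

// fail records a failure and schedules the next attempt, doubling the delay
// up to an assumed cap.
func (g *gate) fail(key string) time.Duration {
	g.mu.Lock()
	defer g.mu.Unlock()
	d := g.backoff[key]
	if d == 0 {
		d = 500 * time.Millisecond // initial delay seen in the log above
	} else if d < 2*time.Minute {
		d *= 2
	}
	g.backoff[key] = d
	g.nextTry[key] = time.Now().Add(d)
	return d
}

// allowed reports whether an operation for key may be attempted now.
func (g *gate) allowed(key string) bool {
	g.mu.Lock()
	defer g.mu.Unlock()
	return time.Now().After(g.nextTry[key])
}

func main() {
	g := newGate()
	key := "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-..." // hypothetical key
	d := g.fail(key)
	fmt.Printf("No retries permitted until %s (durationBeforeRetry %s)\n",
		time.Now().Add(d).Format(time.RFC3339), d)
	fmt.Println("allowed now:", g.allowed(key))
}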
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.297306 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:04:02 crc kubenswrapper[4620]: E0129 15:04:02.297611 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:02.797595915 +0000 UTC m=+183.410423560 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.334253 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lg725" event={"ID":"2aaa7ad3-f4ef-4a15-993d-43166028f71b","Type":"ContainerStarted","Data":"95c222983709117248415b7744e61e9241e2b2785ce891797c4dba5a045ffab7"} Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.338458 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" event={"ID":"11cea8f7-7594-45c6-8f0c-77caad6bdbc5","Type":"ContainerStarted","Data":"a1cafe731bc33f1d91def68e36a760da22bc2b57e1f88fcd60b625c762bc04dc"} Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.342368 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f4147aba-e9a4-47c8-8ceb-1a743b34c328","Type":"ContainerStarted","Data":"55535d13c9feadb90c27c83451f76d2e2cc3056da4e826d470b21f79bdacbbb4"} Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.375202 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.37518659 podStartE2EDuration="3.37518659s" podCreationTimestamp="2026-01-29 15:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:04:02.373119275 +0000 UTC m=+182.985946920" watchObservedRunningTime="2026-01-29 15:04:02.37518659 +0000 UTC m=+182.988014235" Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.398511 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:02 crc kubenswrapper[4620]: E0129 15:04:02.399662 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:02.89965032 +0000 UTC m=+183.512477955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.499609 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:04:02 crc kubenswrapper[4620]: E0129 15:04:02.499821 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:02.999789827 +0000 UTC m=+183.612617472 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.500275 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:02 crc kubenswrapper[4620]: E0129 15:04:02.500730 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:03.000714876 +0000 UTC m=+183.613542531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.603364 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:04:02 crc kubenswrapper[4620]: E0129 15:04:02.603868 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:03.103850326 +0000 UTC m=+183.716677971 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.677745 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qlbvc" Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.692554 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l8bmh"] Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.704576 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:02 crc kubenswrapper[4620]: E0129 15:04:02.704957 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:03.204945332 +0000 UTC m=+183.817772977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.721883 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mzcfs"] Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.744695 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.806242 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:04:02 crc kubenswrapper[4620]: E0129 15:04:02.807361 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:03.3073425 +0000 UTC m=+183.920170145 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.822599 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jn68n"] Jan 29 15:04:02 crc kubenswrapper[4620]: W0129 15:04:02.842361 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67739a3a_d009_4685_a79a_aaa81f5b2daf.slice/crio-8e5f699266add6fa52314306b97113658e6aa4211884dc8febbec0cf3000099f WatchSource:0}: Error finding container 8e5f699266add6fa52314306b97113658e6aa4211884dc8febbec0cf3000099f: Status 404 returned error can't find the container with id 8e5f699266add6fa52314306b97113658e6aa4211884dc8febbec0cf3000099f Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.900574 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mn8xm"] Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.901694 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mn8xm" Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.912265 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.913399 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mn8xm"] Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.913411 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:02 crc kubenswrapper[4620]: E0129 15:04:02.913618 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:03.413608678 +0000 UTC m=+184.026436323 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.940556 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:04:02 crc kubenswrapper[4620]: [-]has-synced failed: reason withheld Jan 29 15:04:02 crc kubenswrapper[4620]: [+]process-running ok Jan 29 15:04:02 crc kubenswrapper[4620]: healthz check failed Jan 29 15:04:02 crc kubenswrapper[4620]: I0129 15:04:02.940953 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.017318 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.020817 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:03.520793686 +0000 UTC m=+184.133621321 (durationBeforeRetry 500ms). 
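[editor's note — not part of the captured log] The router startup-probe failures above carry an aggregated healthz body ("[-]backend-http failed: reason withheld", "[+]process-running ok", trailing "healthz check failed") plus the HTTP 500 the prober reports. A sketch of a handler emitting that format; the check names and failure logic here are invented for illustration, not the router's real checks:

// healthzsketch.go - illustrative sketch of an aggregated health endpoint.
package main

import (
	"fmt"
	"log"
	"net/http"
)

type check struct {
	name string
	run  func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.run(); err != nil {
				failed = true
				// Endpoints often hide details from probes: "reason withheld".
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // the 500 kubelet logs
			body += "healthz check failed\n"
		} else {
			body += "healthz check passed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	checks := []check{
		{"backend-http", func() error { return fmt.Errorf("not ready") }},
		{"has-synced", func() error { return fmt.Errorf("not synced") }},
		{"process-running", func() error { return nil }},
	}
	http.Handle("/healthz", healthz(checks))
	log.Fatal(http.ListenAndServe(":8080", nil))
}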
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.020900 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ced9400-3ad4-4717-ab18-327d5e40daa4-utilities\") pod \"redhat-marketplace-mn8xm\" (UID: \"8ced9400-3ad4-4717-ab18-327d5e40daa4\") " pod="openshift-marketplace/redhat-marketplace-mn8xm" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.020951 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.020976 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ced9400-3ad4-4717-ab18-327d5e40daa4-catalog-content\") pod \"redhat-marketplace-mn8xm\" (UID: \"8ced9400-3ad4-4717-ab18-327d5e40daa4\") " pod="openshift-marketplace/redhat-marketplace-mn8xm" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.020993 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktvhb\" (UniqueName: \"kubernetes.io/projected/8ced9400-3ad4-4717-ab18-327d5e40daa4-kube-api-access-ktvhb\") pod \"redhat-marketplace-mn8xm\" (UID: \"8ced9400-3ad4-4717-ab18-327d5e40daa4\") " pod="openshift-marketplace/redhat-marketplace-mn8xm" Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.021263 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:03.52125613 +0000 UTC m=+184.134083775 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.124471 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.124601 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktvhb\" (UniqueName: \"kubernetes.io/projected/8ced9400-3ad4-4717-ab18-327d5e40daa4-kube-api-access-ktvhb\") pod \"redhat-marketplace-mn8xm\" (UID: \"8ced9400-3ad4-4717-ab18-327d5e40daa4\") " pod="openshift-marketplace/redhat-marketplace-mn8xm" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.124681 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ced9400-3ad4-4717-ab18-327d5e40daa4-utilities\") pod \"redhat-marketplace-mn8xm\" (UID: \"8ced9400-3ad4-4717-ab18-327d5e40daa4\") " pod="openshift-marketplace/redhat-marketplace-mn8xm" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.124733 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ced9400-3ad4-4717-ab18-327d5e40daa4-catalog-content\") pod \"redhat-marketplace-mn8xm\" (UID: \"8ced9400-3ad4-4717-ab18-327d5e40daa4\") " pod="openshift-marketplace/redhat-marketplace-mn8xm" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.125169 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ced9400-3ad4-4717-ab18-327d5e40daa4-catalog-content\") pod \"redhat-marketplace-mn8xm\" (UID: \"8ced9400-3ad4-4717-ab18-327d5e40daa4\") " pod="openshift-marketplace/redhat-marketplace-mn8xm" Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.125247 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:03.625232047 +0000 UTC m=+184.238059692 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.125698 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ced9400-3ad4-4717-ab18-327d5e40daa4-utilities\") pod \"redhat-marketplace-mn8xm\" (UID: \"8ced9400-3ad4-4717-ab18-327d5e40daa4\") " pod="openshift-marketplace/redhat-marketplace-mn8xm" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.160242 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktvhb\" (UniqueName: \"kubernetes.io/projected/8ced9400-3ad4-4717-ab18-327d5e40daa4-kube-api-access-ktvhb\") pod \"redhat-marketplace-mn8xm\" (UID: \"8ced9400-3ad4-4717-ab18-327d5e40daa4\") " pod="openshift-marketplace/redhat-marketplace-mn8xm" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.226133 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.226544 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:03.72653118 +0000 UTC m=+184.339358825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.234720 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mn8xm" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.275544 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xwr75"] Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.276551 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xwr75" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.327401 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.327592 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbrp7\" (UniqueName: \"kubernetes.io/projected/2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff-kube-api-access-xbrp7\") pod \"redhat-marketplace-xwr75\" (UID: \"2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff\") " pod="openshift-marketplace/redhat-marketplace-xwr75" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.327620 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff-utilities\") pod \"redhat-marketplace-xwr75\" (UID: \"2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff\") " pod="openshift-marketplace/redhat-marketplace-xwr75" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.327665 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff-catalog-content\") pod \"redhat-marketplace-xwr75\" (UID: \"2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff\") " pod="openshift-marketplace/redhat-marketplace-xwr75" Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.327833 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:03.827819532 +0000 UTC m=+184.440647167 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.340266 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xwr75"] Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.360307 4620 generic.go:334] "Generic (PLEG): container finished" podID="2aaa7ad3-f4ef-4a15-993d-43166028f71b" containerID="1b935450c9063427d649f9e2d69f15f84dfc4064df6f628097f2ba9abb059d69" exitCode=0 Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.360423 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lg725" event={"ID":"2aaa7ad3-f4ef-4a15-993d-43166028f71b","Type":"ContainerDied","Data":"1b935450c9063427d649f9e2d69f15f84dfc4064df6f628097f2ba9abb059d69"} Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.365409 4620 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.381683 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"03c75101-bbca-4251-9357-a834826c89bd","Type":"ContainerStarted","Data":"b6206c65918a37e76e068b33936732cc29a6a3f93f5f1bb7f9fef279f2e016db"} Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.394542 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" event={"ID":"11cea8f7-7594-45c6-8f0c-77caad6bdbc5","Type":"ContainerStarted","Data":"1cae267adbdfdd5e7e9146e36f4177efbbd12b331b9a9b5b670337b22c67b926"} Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.412041 4620 generic.go:334] "Generic (PLEG): container finished" podID="f4147aba-e9a4-47c8-8ceb-1a743b34c328" containerID="55535d13c9feadb90c27c83451f76d2e2cc3056da4e826d470b21f79bdacbbb4" exitCode=0 Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.412108 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f4147aba-e9a4-47c8-8ceb-1a743b34c328","Type":"ContainerDied","Data":"55535d13c9feadb90c27c83451f76d2e2cc3056da4e826d470b21f79bdacbbb4"} Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.428717 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff-utilities\") pod \"redhat-marketplace-xwr75\" (UID: \"2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff\") " pod="openshift-marketplace/redhat-marketplace-xwr75" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.428800 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff-catalog-content\") pod \"redhat-marketplace-xwr75\" (UID: \"2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff\") " pod="openshift-marketplace/redhat-marketplace-xwr75" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.428895 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.428923 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbrp7\" (UniqueName: \"kubernetes.io/projected/2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff-kube-api-access-xbrp7\") pod \"redhat-marketplace-xwr75\" (UID: \"2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff\") " pod="openshift-marketplace/redhat-marketplace-xwr75" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.429515 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff-utilities\") pod \"redhat-marketplace-xwr75\" (UID: \"2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff\") " pod="openshift-marketplace/redhat-marketplace-xwr75" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.429731 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff-catalog-content\") pod \"redhat-marketplace-xwr75\" (UID: \"2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff\") " pod="openshift-marketplace/redhat-marketplace-xwr75" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.429766 4620 generic.go:334] "Generic (PLEG): container finished" podID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" containerID="98c519773309d38c7105fc9e21e1893fc56d3495d84172f52e3f71bcd2105f41" exitCode=0 Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.429888 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzcfs" event={"ID":"d401dd8c-5cfc-4cbd-92de-bb9896e90ea0","Type":"ContainerDied","Data":"98c519773309d38c7105fc9e21e1893fc56d3495d84172f52e3f71bcd2105f41"} Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.429917 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzcfs" event={"ID":"d401dd8c-5cfc-4cbd-92de-bb9896e90ea0","Type":"ContainerStarted","Data":"0f9aed9c8828b6e20f852a5d135d30aae116eed52d516d1205d2b26df403fb02"} Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.430540 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:03.930528799 +0000 UTC m=+184.543356444 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.442171 4620 generic.go:334] "Generic (PLEG): container finished" podID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" containerID="2c66295885ddd88737f1d0c53ab10c4bddc1e2e00647c2b8ca4b8c1ec65be911" exitCode=0 Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.442274 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l8bmh" event={"ID":"0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2","Type":"ContainerDied","Data":"2c66295885ddd88737f1d0c53ab10c4bddc1e2e00647c2b8ca4b8c1ec65be911"} Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.442300 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l8bmh" event={"ID":"0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2","Type":"ContainerStarted","Data":"017ce28f47c8e9d6385e692d617717025f6d464ba9253a3774c124296ac3c309"} Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.448827 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbrp7\" (UniqueName: \"kubernetes.io/projected/2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff-kube-api-access-xbrp7\") pod \"redhat-marketplace-xwr75\" (UID: \"2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff\") " pod="openshift-marketplace/redhat-marketplace-xwr75" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.456390 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn68n" event={"ID":"67739a3a-d009-4685-a79a-aaa81f5b2daf","Type":"ContainerStarted","Data":"8e5f699266add6fa52314306b97113658e6aa4211884dc8febbec0cf3000099f"} Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.531293 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.532393 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:04.032373329 +0000 UTC m=+184.645200974 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.606461 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.606740 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6ws5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-l8bmh_openshift-marketplace(0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.608075 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-l8bmh" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.616264 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status 
code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.616424 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nw6hg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-mzcfs_openshift-marketplace(d401dd8c-5cfc-4cbd-92de-bb9896e90ea0): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.616633 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.616705 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mzm4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-lg725_openshift-marketplace(2aaa7ad3-f4ef-4a15-993d-43166028f71b): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.617879 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-lg725" podUID="2aaa7ad3-f4ef-4a15-993d-43166028f71b" Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.617938 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-mzcfs" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.621726 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.621824 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5k7hs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jn68n_openshift-marketplace(67739a3a-d009-4685-a79a-aaa81f5b2daf): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.623582 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-jn68n" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.633952 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.634270 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:04.134257089 +0000 UTC m=+184.747084734 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.666170 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xwr75" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.712523 4620 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.735478 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.735878 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:04.235861921 +0000 UTC m=+184.848689566 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.836615 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.836923 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:04:04.336902446 +0000 UTC m=+184.949730091 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.880027 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tqw2q"] Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.881302 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tqw2q" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.883253 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mn8xm"] Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.894650 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.908248 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tqw2q"] Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.938685 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.938939 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f8b2b8-8396-425a-94ac-e66deddac937-catalog-content\") pod \"redhat-operators-tqw2q\" (UID: \"a7f8b2b8-8396-425a-94ac-e66deddac937\") " pod="openshift-marketplace/redhat-operators-tqw2q" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.939041 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqh7x\" (UniqueName: \"kubernetes.io/projected/a7f8b2b8-8396-425a-94ac-e66deddac937-kube-api-access-cqh7x\") pod \"redhat-operators-tqw2q\" (UID: \"a7f8b2b8-8396-425a-94ac-e66deddac937\") " pod="openshift-marketplace/redhat-operators-tqw2q" Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.939082 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f8b2b8-8396-425a-94ac-e66deddac937-utilities\") pod \"redhat-operators-tqw2q\" (UID: \"a7f8b2b8-8396-425a-94ac-e66deddac937\") " pod="openshift-marketplace/redhat-operators-tqw2q" Jan 29 15:04:03 crc kubenswrapper[4620]: E0129 15:04:03.939182 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:04.439167528 +0000 UTC m=+185.051995163 (durationBeforeRetry 500ms). 
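[editor's note — not part of the captured log] The ErrImagePull entries above fail at "Requesting bearer token: invalid status code from registry 403 (Forbidden)": the registry's token server rejected the presented pull-secret credentials, rather than the image being absent. A deliberately naive Go sketch of that token handshake, assuming the usual `Bearer realm="...",service="..."` challenge shape; real clients (containers/image, used by CRI-O) handle quoting, multiple challenges, and POST flows:

// tokenprobe.go - illustrative sketch of the registry v2 bearer-token flow.
package main

import (
	"fmt"
	"log"
	"net/http"
	"strings"
)

// parseChallenge pulls realm and service out of a Bearer challenge.
// Naive: assumes no commas inside quoted values.
func parseChallenge(h string) (realm, service string) {
	h = strings.TrimPrefix(h, "Bearer ")
	for _, part := range strings.Split(h, ",") {
		kv := strings.SplitN(part, "=", 2)
		if len(kv) != 2 {
			continue
		}
		v := strings.Trim(kv[1], `"`)
		switch strings.TrimSpace(kv[0]) {
		case "realm":
			realm = v
		case "service":
			service = v
		}
	}
	return realm, service
}

func main() {
	// Step 1: unauthenticated ping; registries answer 401 with a
	// WWW-Authenticate: Bearer challenge naming the token server.
	resp, err := http.Get("https://registry.redhat.io/v2/")
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
	realm, service := parseChallenge(resp.Header.Get("WWW-Authenticate"))

	// Step 2: request a token from the realm. This is where the log's 403
	// originates - the token server refused the credentials for the scope.
	tokenURL := fmt.Sprintf("%s?service=%s&scope=repository:redhat/community-operator-index:pull",
		realm, service)
	req, _ := http.NewRequest("GET", tokenURL, nil)
	// req.SetBasicAuth(user, password) // pull-secret credentials would go here
	resp2, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	resp2.Body.Close()
	fmt.Println("token endpoint status:", resp2.Status) // 403 == credentials rejected
}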
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.945243 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:04:03 crc kubenswrapper[4620]: [-]has-synced failed: reason withheld Jan 29 15:04:03 crc kubenswrapper[4620]: [+]process-running ok Jan 29 15:04:03 crc kubenswrapper[4620]: healthz check failed Jan 29 15:04:03 crc kubenswrapper[4620]: I0129 15:04:03.945295 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.040633 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqh7x\" (UniqueName: \"kubernetes.io/projected/a7f8b2b8-8396-425a-94ac-e66deddac937-kube-api-access-cqh7x\") pod \"redhat-operators-tqw2q\" (UID: \"a7f8b2b8-8396-425a-94ac-e66deddac937\") " pod="openshift-marketplace/redhat-operators-tqw2q" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.041022 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f8b2b8-8396-425a-94ac-e66deddac937-utilities\") pod \"redhat-operators-tqw2q\" (UID: \"a7f8b2b8-8396-425a-94ac-e66deddac937\") " pod="openshift-marketplace/redhat-operators-tqw2q" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.041072 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.041097 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f8b2b8-8396-425a-94ac-e66deddac937-catalog-content\") pod \"redhat-operators-tqw2q\" (UID: \"a7f8b2b8-8396-425a-94ac-e66deddac937\") " pod="openshift-marketplace/redhat-operators-tqw2q" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.041646 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f8b2b8-8396-425a-94ac-e66deddac937-catalog-content\") pod \"redhat-operators-tqw2q\" (UID: \"a7f8b2b8-8396-425a-94ac-e66deddac937\") " pod="openshift-marketplace/redhat-operators-tqw2q" Jan 29 15:04:04 crc kubenswrapper[4620]: E0129 15:04:04.041964 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 15:04:04.541946178 +0000 UTC m=+185.154773823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tzmcd" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.040945 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xwr75"] Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.043176 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f8b2b8-8396-425a-94ac-e66deddac937-utilities\") pod \"redhat-operators-tqw2q\" (UID: \"a7f8b2b8-8396-425a-94ac-e66deddac937\") " pod="openshift-marketplace/redhat-operators-tqw2q" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.104976 4620 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-29T15:04:03.712545666Z","Handler":null,"Name":""} Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.105628 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqh7x\" (UniqueName: \"kubernetes.io/projected/a7f8b2b8-8396-425a-94ac-e66deddac937-kube-api-access-cqh7x\") pod \"redhat-operators-tqw2q\" (UID: \"a7f8b2b8-8396-425a-94ac-e66deddac937\") " pod="openshift-marketplace/redhat-operators-tqw2q" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.120882 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.120959 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.143220 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:04:04 crc kubenswrapper[4620]: E0129 15:04:04.143602 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:04:04.643587941 +0000 UTC m=+185.256415586 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.189450 4620 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.189502 4620 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.214050 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tqw2q" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.244277 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.250232 4620 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.250403 4620 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.283535 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-669zb"] Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.285190 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-669zb" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.296955 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-669zb"] Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.331258 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tzmcd\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.346390 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.346736 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60d03ebd-82d4-4ffe-895e-2909c15480d7-utilities\") pod \"redhat-operators-669zb\" (UID: \"60d03ebd-82d4-4ffe-895e-2909c15480d7\") " pod="openshift-marketplace/redhat-operators-669zb" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.346784 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8jkp\" (UniqueName: \"kubernetes.io/projected/60d03ebd-82d4-4ffe-895e-2909c15480d7-kube-api-access-c8jkp\") pod \"redhat-operators-669zb\" (UID: \"60d03ebd-82d4-4ffe-895e-2909c15480d7\") " pod="openshift-marketplace/redhat-operators-669zb" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.346842 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60d03ebd-82d4-4ffe-895e-2909c15480d7-catalog-content\") pod \"redhat-operators-669zb\" (UID: \"60d03ebd-82d4-4ffe-895e-2909c15480d7\") " pod="openshift-marketplace/redhat-operators-669zb" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.378746 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.447858 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60d03ebd-82d4-4ffe-895e-2909c15480d7-catalog-content\") pod \"redhat-operators-669zb\" (UID: \"60d03ebd-82d4-4ffe-895e-2909c15480d7\") " pod="openshift-marketplace/redhat-operators-669zb" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.447971 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60d03ebd-82d4-4ffe-895e-2909c15480d7-utilities\") pod \"redhat-operators-669zb\" (UID: \"60d03ebd-82d4-4ffe-895e-2909c15480d7\") " pod="openshift-marketplace/redhat-operators-669zb" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.447993 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8jkp\" (UniqueName: \"kubernetes.io/projected/60d03ebd-82d4-4ffe-895e-2909c15480d7-kube-api-access-c8jkp\") pod \"redhat-operators-669zb\" (UID: \"60d03ebd-82d4-4ffe-895e-2909c15480d7\") " pod="openshift-marketplace/redhat-operators-669zb" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.448426 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60d03ebd-82d4-4ffe-895e-2909c15480d7-catalog-content\") pod \"redhat-operators-669zb\" (UID: \"60d03ebd-82d4-4ffe-895e-2909c15480d7\") " pod="openshift-marketplace/redhat-operators-669zb" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.448481 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60d03ebd-82d4-4ffe-895e-2909c15480d7-utilities\") pod \"redhat-operators-669zb\" (UID: \"60d03ebd-82d4-4ffe-895e-2909c15480d7\") " pod="openshift-marketplace/redhat-operators-669zb" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.460409 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.463539 4620 generic.go:334] "Generic (PLEG): container finished" podID="67739a3a-d009-4685-a79a-aaa81f5b2daf" containerID="dc1a5f100e79466a9ee303d85d7acb2ea5bbfe60a971d6da5d9f1280672c59fc" exitCode=0 Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.463614 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn68n" event={"ID":"67739a3a-d009-4685-a79a-aaa81f5b2daf","Type":"ContainerDied","Data":"dc1a5f100e79466a9ee303d85d7acb2ea5bbfe60a971d6da5d9f1280672c59fc"} Jan 29 15:04:04 crc kubenswrapper[4620]: E0129 15:04:04.465476 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jn68n" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.467784 4620 generic.go:334] "Generic (PLEG): container finished" podID="8ced9400-3ad4-4717-ab18-327d5e40daa4" containerID="75924686e6fbad9a59207802306d07d25f7e2c69cf511744e83bd917d018dc04" exitCode=0 Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.467845 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-mn8xm" event={"ID":"8ced9400-3ad4-4717-ab18-327d5e40daa4","Type":"ContainerDied","Data":"75924686e6fbad9a59207802306d07d25f7e2c69cf511744e83bd917d018dc04"} Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.467871 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mn8xm" event={"ID":"8ced9400-3ad4-4717-ab18-327d5e40daa4","Type":"ContainerStarted","Data":"0454eb1943a838e1e5e74919528a14d4a9ec62ebd2ad003ab1421642ed8d3deb"} Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.481138 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8jkp\" (UniqueName: \"kubernetes.io/projected/60d03ebd-82d4-4ffe-895e-2909c15480d7-kube-api-access-c8jkp\") pod \"redhat-operators-669zb\" (UID: \"60d03ebd-82d4-4ffe-895e-2909c15480d7\") " pod="openshift-marketplace/redhat-operators-669zb" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.482984 4620 generic.go:334] "Generic (PLEG): container finished" podID="03c75101-bbca-4251-9357-a834826c89bd" containerID="f54ecbf447245754a7031952f7b468cd24fe5bb79f128d46a726eda15600782e" exitCode=0 Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.483076 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"03c75101-bbca-4251-9357-a834826c89bd","Type":"ContainerDied","Data":"f54ecbf447245754a7031952f7b468cd24fe5bb79f128d46a726eda15600782e"} Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.483407 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tqw2q"] Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.487870 4620 generic.go:334] "Generic (PLEG): container finished" podID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" containerID="b9f420004e20579ac5d0621693eb8452ec2d9beb6aa254c1c00db93d7f0b29b0" exitCode=0 Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.488732 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xwr75" event={"ID":"2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff","Type":"ContainerDied","Data":"b9f420004e20579ac5d0621693eb8452ec2d9beb6aa254c1c00db93d7f0b29b0"} Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.488799 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xwr75" event={"ID":"2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff","Type":"ContainerStarted","Data":"40627e01719de76b933acc4bef8d86eb5d58ea9de485a55cbc01a63d942c6ea8"} Jan 29 15:04:04 crc kubenswrapper[4620]: W0129 15:04:04.495506 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7f8b2b8_8396_425a_94ac_e66deddac937.slice/crio-b6f6c81bfa5c5d6f70a4b1f498caa9fbd5aa82cf7e477093cbf6690c029b47cd WatchSource:0}: Error finding container b6f6c81bfa5c5d6f70a4b1f498caa9fbd5aa82cf7e477093cbf6690c029b47cd: Status 404 returned error can't find the container with id b6f6c81bfa5c5d6f70a4b1f498caa9fbd5aa82cf7e477093cbf6690c029b47cd Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.496440 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" event={"ID":"11cea8f7-7594-45c6-8f0c-77caad6bdbc5","Type":"ContainerStarted","Data":"ee8bdbcc56999ccffcb80c90f6dea9ebae7b95dee55820803e622d6c145dbe66"} Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.499888 4620 generic.go:334] "Generic (PLEG): container finished" 
podID="713c81b7-8d56-4d58-bd4e-f827de0ca17b" containerID="502515f1acee8bd32ea86f0d38418ac91f76362106c55fc1f4f18e1e3902ff4f" exitCode=0 Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.500046 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9" event={"ID":"713c81b7-8d56-4d58-bd4e-f827de0ca17b","Type":"ContainerDied","Data":"502515f1acee8bd32ea86f0d38418ac91f76362106c55fc1f4f18e1e3902ff4f"} Jan 29 15:04:04 crc kubenswrapper[4620]: E0129 15:04:04.502375 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-mzcfs" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" Jan 29 15:04:04 crc kubenswrapper[4620]: E0129 15:04:04.502443 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-lg725" podUID="2aaa7ad3-f4ef-4a15-993d-43166028f71b" Jan 29 15:04:04 crc kubenswrapper[4620]: E0129 15:04:04.508103 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l8bmh" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" Jan 29 15:04:04 crc kubenswrapper[4620]: E0129 15:04:04.604262 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:04:04 crc kubenswrapper[4620]: E0129 15:04:04.604433 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktvhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-mn8xm_openshift-marketplace(8ced9400-3ad4-4717-ab18-327d5e40daa4): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:04 crc kubenswrapper[4620]: E0129 15:04:04.605830 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-mn8xm" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.612153 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.620506 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.633700 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-lzb7h" podStartSLOduration=16.633678476 podStartE2EDuration="16.633678476s" podCreationTimestamp="2026-01-29 15:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:04:04.633533951 +0000 UTC m=+185.246361616" watchObservedRunningTime="2026-01-29 15:04:04.633678476 +0000 UTC m=+185.246506121" Jan 29 15:04:04 crc kubenswrapper[4620]: E0129 15:04:04.647003 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:04:04 crc kubenswrapper[4620]: E0129 15:04:04.647176 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xbrp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-xwr75_openshift-marketplace(2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:04 crc kubenswrapper[4620]: E0129 15:04:04.649375 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-xwr75" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.670195 4620 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-669zb" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.826636 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.864849 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4147aba-e9a4-47c8-8ceb-1a743b34c328-kubelet-dir\") pod \"f4147aba-e9a4-47c8-8ceb-1a743b34c328\" (UID: \"f4147aba-e9a4-47c8-8ceb-1a743b34c328\") " Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.865233 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4147aba-e9a4-47c8-8ceb-1a743b34c328-kube-api-access\") pod \"f4147aba-e9a4-47c8-8ceb-1a743b34c328\" (UID: \"f4147aba-e9a4-47c8-8ceb-1a743b34c328\") " Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.866273 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4147aba-e9a4-47c8-8ceb-1a743b34c328-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f4147aba-e9a4-47c8-8ceb-1a743b34c328" (UID: "f4147aba-e9a4-47c8-8ceb-1a743b34c328"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.890070 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4147aba-e9a4-47c8-8ceb-1a743b34c328-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f4147aba-e9a4-47c8-8ceb-1a743b34c328" (UID: "f4147aba-e9a4-47c8-8ceb-1a743b34c328"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.916660 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.942427 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:04:04 crc kubenswrapper[4620]: [-]has-synced failed: reason withheld Jan 29 15:04:04 crc kubenswrapper[4620]: [+]process-running ok Jan 29 15:04:04 crc kubenswrapper[4620]: healthz check failed Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.942514 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.969735 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4147aba-e9a4-47c8-8ceb-1a743b34c328-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:04 crc kubenswrapper[4620]: I0129 15:04:04.969893 4620 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4147aba-e9a4-47c8-8ceb-1a743b34c328-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.055585 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-669zb"] Jan 29 15:04:05 crc kubenswrapper[4620]: W0129 15:04:05.066470 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60d03ebd_82d4_4ffe_895e_2909c15480d7.slice/crio-191136b7a1d61a5abcc8674069506d85636f4c97724afe1b72c00672db610d71 WatchSource:0}: Error finding container 191136b7a1d61a5abcc8674069506d85636f4c97724afe1b72c00672db610d71: Status 404 returned error can't find the container with id 191136b7a1d61a5abcc8674069506d85636f4c97724afe1b72c00672db610d71 Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.119732 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tzmcd"] Jan 29 15:04:05 crc kubenswrapper[4620]: W0129 15:04:05.138393 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e8c35b0_3703_4eff_8610_6933e4b7b391.slice/crio-b0840e1b6289307fb42d519cce61ff26d4e26efe36c6ea30bb0c27a2c3b68a28 WatchSource:0}: Error finding container b0840e1b6289307fb42d519cce61ff26d4e26efe36c6ea30bb0c27a2c3b68a28: Status 404 returned error can't find the container with id b0840e1b6289307fb42d519cce61ff26d4e26efe36c6ea30bb0c27a2c3b68a28 Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.507368 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" event={"ID":"8e8c35b0-3703-4eff-8610-6933e4b7b391","Type":"ContainerStarted","Data":"47b9e9f66dd0c2e49eaabd2b5d73dd51ce08a7e9f337425bbcd16e6d262e2ab7"} Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.507685 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" event={"ID":"8e8c35b0-3703-4eff-8610-6933e4b7b391","Type":"ContainerStarted","Data":"b0840e1b6289307fb42d519cce61ff26d4e26efe36c6ea30bb0c27a2c3b68a28"} Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.507705 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.510648 4620 generic.go:334] "Generic (PLEG): container finished" podID="60d03ebd-82d4-4ffe-895e-2909c15480d7" containerID="985b6cd289d66eaa145d898a232a4d6969115e51f55356fd56f8e6b069eceb63" exitCode=0 Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.510872 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-669zb" event={"ID":"60d03ebd-82d4-4ffe-895e-2909c15480d7","Type":"ContainerDied","Data":"985b6cd289d66eaa145d898a232a4d6969115e51f55356fd56f8e6b069eceb63"} Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.510930 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-669zb" event={"ID":"60d03ebd-82d4-4ffe-895e-2909c15480d7","Type":"ContainerStarted","Data":"191136b7a1d61a5abcc8674069506d85636f4c97724afe1b72c00672db610d71"} Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.513273 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f4147aba-e9a4-47c8-8ceb-1a743b34c328","Type":"ContainerDied","Data":"185f006d72248f90f06e5da8b8f04fe7eb9b134d4c8667dfc6739b333b14e15c"} Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.513305 4620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="185f006d72248f90f06e5da8b8f04fe7eb9b134d4c8667dfc6739b333b14e15c" Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.513319 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.517035 4620 generic.go:334] "Generic (PLEG): container finished" podID="a7f8b2b8-8396-425a-94ac-e66deddac937" containerID="f226daf90f70d8f8499c4862141bc4e37ee135f9a4207cee4c45d1914b6d8410" exitCode=0 Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.517668 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqw2q" event={"ID":"a7f8b2b8-8396-425a-94ac-e66deddac937","Type":"ContainerDied","Data":"f226daf90f70d8f8499c4862141bc4e37ee135f9a4207cee4c45d1914b6d8410"} Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.517878 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqw2q" event={"ID":"a7f8b2b8-8396-425a-94ac-e66deddac937","Type":"ContainerStarted","Data":"b6f6c81bfa5c5d6f70a4b1f498caa9fbd5aa82cf7e477093cbf6690c029b47cd"} Jan 29 15:04:05 crc kubenswrapper[4620]: E0129 15:04:05.518395 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-xwr75" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" Jan 29 15:04:05 crc kubenswrapper[4620]: E0129 15:04:05.521219 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jn68n" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" Jan 29 15:04:05 crc kubenswrapper[4620]: E0129 15:04:05.521879 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-mn8xm" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.564922 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" podStartSLOduration=157.564899984 podStartE2EDuration="2m37.564899984s" podCreationTimestamp="2026-01-29 15:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:04:05.538066358 +0000 UTC m=+186.150894023" watchObservedRunningTime="2026-01-29 15:04:05.564899984 +0000 UTC m=+186.177727629" Jan 29 15:04:05 crc kubenswrapper[4620]: E0129 15:04:05.648909 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:04:05 crc kubenswrapper[4620]: E0129 15:04:05.649069 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8jkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-669zb_openshift-marketplace(60d03ebd-82d4-4ffe-895e-2909c15480d7): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:05 crc kubenswrapper[4620]: E0129 15:04:05.653972 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-669zb" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" Jan 29 15:04:05 crc kubenswrapper[4620]: E0129 15:04:05.659634 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:04:05 crc kubenswrapper[4620]: E0129 15:04:05.659798 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqh7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tqw2q_openshift-marketplace(a7f8b2b8-8396-425a-94ac-e66deddac937): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:05 crc kubenswrapper[4620]: E0129 15:04:05.661530 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-tqw2q" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.687072 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.691985 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-x75k8" Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.762445 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.813544 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9" Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.885117 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/713c81b7-8d56-4d58-bd4e-f827de0ca17b-secret-volume\") pod \"713c81b7-8d56-4d58-bd4e-f827de0ca17b\" (UID: \"713c81b7-8d56-4d58-bd4e-f827de0ca17b\") " Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.885217 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03c75101-bbca-4251-9357-a834826c89bd-kube-api-access\") pod \"03c75101-bbca-4251-9357-a834826c89bd\" (UID: \"03c75101-bbca-4251-9357-a834826c89bd\") " Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.885274 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrp29\" (UniqueName: \"kubernetes.io/projected/713c81b7-8d56-4d58-bd4e-f827de0ca17b-kube-api-access-hrp29\") pod \"713c81b7-8d56-4d58-bd4e-f827de0ca17b\" (UID: \"713c81b7-8d56-4d58-bd4e-f827de0ca17b\") " Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.885303 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03c75101-bbca-4251-9357-a834826c89bd-kubelet-dir\") pod \"03c75101-bbca-4251-9357-a834826c89bd\" (UID: \"03c75101-bbca-4251-9357-a834826c89bd\") " Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.885385 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/713c81b7-8d56-4d58-bd4e-f827de0ca17b-config-volume\") pod \"713c81b7-8d56-4d58-bd4e-f827de0ca17b\" (UID: \"713c81b7-8d56-4d58-bd4e-f827de0ca17b\") " Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.886433 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/713c81b7-8d56-4d58-bd4e-f827de0ca17b-config-volume" (OuterVolumeSpecName: "config-volume") pod "713c81b7-8d56-4d58-bd4e-f827de0ca17b" (UID: "713c81b7-8d56-4d58-bd4e-f827de0ca17b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.886483 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03c75101-bbca-4251-9357-a834826c89bd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "03c75101-bbca-4251-9357-a834826c89bd" (UID: "03c75101-bbca-4251-9357-a834826c89bd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.890490 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03c75101-bbca-4251-9357-a834826c89bd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "03c75101-bbca-4251-9357-a834826c89bd" (UID: "03c75101-bbca-4251-9357-a834826c89bd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.891121 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/713c81b7-8d56-4d58-bd4e-f827de0ca17b-kube-api-access-hrp29" (OuterVolumeSpecName: "kube-api-access-hrp29") pod "713c81b7-8d56-4d58-bd4e-f827de0ca17b" (UID: "713c81b7-8d56-4d58-bd4e-f827de0ca17b"). 
InnerVolumeSpecName "kube-api-access-hrp29". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.891988 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/713c81b7-8d56-4d58-bd4e-f827de0ca17b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "713c81b7-8d56-4d58-bd4e-f827de0ca17b" (UID: "713c81b7-8d56-4d58-bd4e-f827de0ca17b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.937458 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:04:05 crc kubenswrapper[4620]: [-]has-synced failed: reason withheld Jan 29 15:04:05 crc kubenswrapper[4620]: [+]process-running ok Jan 29 15:04:05 crc kubenswrapper[4620]: healthz check failed Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.937536 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.986947 4620 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/713c81b7-8d56-4d58-bd4e-f827de0ca17b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.986977 4620 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/713c81b7-8d56-4d58-bd4e-f827de0ca17b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.986986 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03c75101-bbca-4251-9357-a834826c89bd-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.986995 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrp29\" (UniqueName: \"kubernetes.io/projected/713c81b7-8d56-4d58-bd4e-f827de0ca17b-kube-api-access-hrp29\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:05 crc kubenswrapper[4620]: I0129 15:04:05.987005 4620 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03c75101-bbca-4251-9357-a834826c89bd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:06 crc kubenswrapper[4620]: I0129 15:04:06.166711 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-bgtck" Jan 29 15:04:06 crc kubenswrapper[4620]: I0129 15:04:06.523581 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:04:06 crc kubenswrapper[4620]: I0129 15:04:06.523729 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"03c75101-bbca-4251-9357-a834826c89bd","Type":"ContainerDied","Data":"b6206c65918a37e76e068b33936732cc29a6a3f93f5f1bb7f9fef279f2e016db"} Jan 29 15:04:06 crc kubenswrapper[4620]: I0129 15:04:06.523812 4620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6206c65918a37e76e068b33936732cc29a6a3f93f5f1bb7f9fef279f2e016db" Jan 29 15:04:06 crc kubenswrapper[4620]: E0129 15:04:06.527797 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tqw2q" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" Jan 29 15:04:06 crc kubenswrapper[4620]: E0129 15:04:06.527981 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-669zb" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" Jan 29 15:04:06 crc kubenswrapper[4620]: I0129 15:04:06.528054 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9" Jan 29 15:04:06 crc kubenswrapper[4620]: I0129 15:04:06.528276 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9" event={"ID":"713c81b7-8d56-4d58-bd4e-f827de0ca17b","Type":"ContainerDied","Data":"1479c4d184a9b3b388c649d6224706a5ccddd29bf2155081add9fd2fe06223eb"} Jan 29 15:04:06 crc kubenswrapper[4620]: I0129 15:04:06.528342 4620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1479c4d184a9b3b388c649d6224706a5ccddd29bf2155081add9fd2fe06223eb" Jan 29 15:04:06 crc kubenswrapper[4620]: I0129 15:04:06.937418 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:04:06 crc kubenswrapper[4620]: [-]has-synced failed: reason withheld Jan 29 15:04:06 crc kubenswrapper[4620]: [+]process-running ok Jan 29 15:04:06 crc kubenswrapper[4620]: healthz check failed Jan 29 15:04:06 crc kubenswrapper[4620]: I0129 15:04:06.937571 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:04:07 crc kubenswrapper[4620]: I0129 15:04:07.936979 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:04:07 crc kubenswrapper[4620]: [-]has-synced failed: reason withheld Jan 29 15:04:07 crc kubenswrapper[4620]: [+]process-running ok Jan 29 15:04:07 crc kubenswrapper[4620]: healthz check failed Jan 29 15:04:07 crc 
kubenswrapper[4620]: I0129 15:04:07.937044 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:04:08 crc kubenswrapper[4620]: I0129 15:04:08.940772 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:04:08 crc kubenswrapper[4620]: [-]has-synced failed: reason withheld Jan 29 15:04:08 crc kubenswrapper[4620]: [+]process-running ok Jan 29 15:04:08 crc kubenswrapper[4620]: healthz check failed Jan 29 15:04:08 crc kubenswrapper[4620]: I0129 15:04:08.940827 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:04:09 crc kubenswrapper[4620]: I0129 15:04:09.937657 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:04:09 crc kubenswrapper[4620]: [-]has-synced failed: reason withheld Jan 29 15:04:09 crc kubenswrapper[4620]: [+]process-running ok Jan 29 15:04:09 crc kubenswrapper[4620]: healthz check failed Jan 29 15:04:09 crc kubenswrapper[4620]: I0129 15:04:09.937720 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:04:10 crc kubenswrapper[4620]: I0129 15:04:10.523412 4620 patch_prober.go:28] interesting pod/console-f9d7485db-z57gf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 29 15:04:10 crc kubenswrapper[4620]: I0129 15:04:10.523467 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-z57gf" podUID="aa662f18-6ab4-43b8-8e65-8de41043b74d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 29 15:04:10 crc kubenswrapper[4620]: I0129 15:04:10.551072 4620 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwr5k container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 29 15:04:10 crc kubenswrapper[4620]: I0129 15:04:10.551338 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gwr5k" podUID="8a1b8f5b-4123-4cc0-9d2c-dca6ae8453d5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 29 15:04:10 crc kubenswrapper[4620]: I0129 15:04:10.551169 4620 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwr5k container/download-server namespace/openshift-console: Liveness probe status=failure 
output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 29 15:04:10 crc kubenswrapper[4620]: I0129 15:04:10.551561 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gwr5k" podUID="8a1b8f5b-4123-4cc0-9d2c-dca6ae8453d5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 29 15:04:10 crc kubenswrapper[4620]: I0129 15:04:10.937510 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:04:10 crc kubenswrapper[4620]: [-]has-synced failed: reason withheld Jan 29 15:04:10 crc kubenswrapper[4620]: [+]process-running ok Jan 29 15:04:10 crc kubenswrapper[4620]: healthz check failed Jan 29 15:04:10 crc kubenswrapper[4620]: I0129 15:04:10.938261 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:04:11 crc kubenswrapper[4620]: I0129 15:04:11.936838 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:04:11 crc kubenswrapper[4620]: [-]has-synced failed: reason withheld Jan 29 15:04:11 crc kubenswrapper[4620]: [+]process-running ok Jan 29 15:04:11 crc kubenswrapper[4620]: healthz check failed Jan 29 15:04:11 crc kubenswrapper[4620]: I0129 15:04:11.936944 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:04:12 crc kubenswrapper[4620]: I0129 15:04:12.937374 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:04:12 crc kubenswrapper[4620]: [-]has-synced failed: reason withheld Jan 29 15:04:12 crc kubenswrapper[4620]: [+]process-running ok Jan 29 15:04:12 crc kubenswrapper[4620]: healthz check failed Jan 29 15:04:12 crc kubenswrapper[4620]: I0129 15:04:12.937664 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:04:13 crc kubenswrapper[4620]: I0129 15:04:13.937504 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:04:13 crc kubenswrapper[4620]: [+]has-synced ok Jan 29 15:04:13 crc kubenswrapper[4620]: [+]process-running ok Jan 29 15:04:13 crc kubenswrapper[4620]: healthz check failed Jan 29 15:04:13 crc kubenswrapper[4620]: I0129 15:04:13.937886 4620 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:04:14 crc kubenswrapper[4620]: I0129 15:04:14.937839 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-5ppqg" Jan 29 15:04:14 crc kubenswrapper[4620]: I0129 15:04:14.944528 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-5ppqg" Jan 29 15:04:16 crc kubenswrapper[4620]: E0129 15:04:16.004911 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:04:16 crc kubenswrapper[4620]: E0129 15:04:16.005791 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mzm4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-lg725_openshift-marketplace(2aaa7ad3-f4ef-4a15-993d-43166028f71b): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:16 crc kubenswrapper[4620]: E0129 15:04:16.007008 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-lg725" podUID="2aaa7ad3-f4ef-4a15-993d-43166028f71b" Jan 29 15:04:16 crc kubenswrapper[4620]: I0129 15:04:16.470774 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-2j567"] Jan 29 15:04:16 crc kubenswrapper[4620]: I0129 15:04:16.470992 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" podUID="f9d96dcf-b094-485c-8636-401ccc71e918" containerName="controller-manager" containerID="cri-o://6a85540471008882969c933ba22ea3df1c58905bfb7812da960ace1c8418b25e" gracePeriod=30 Jan 29 15:04:16 crc kubenswrapper[4620]: I0129 15:04:16.489329 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb"] Jan 29 15:04:16 crc kubenswrapper[4620]: I0129 15:04:16.489532 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb" podUID="58f39c40-ed87-43dd-90d0-d892d4a56375" containerName="route-controller-manager" containerID="cri-o://0c3ba8b41a42de2e08920d865a9072841f7a9e29e605ad2e035f4d7f8bc432ca" gracePeriod=30 Jan 29 15:04:17 crc kubenswrapper[4620]: I0129 15:04:17.581843 4620 generic.go:334] "Generic (PLEG): container finished" podID="f9d96dcf-b094-485c-8636-401ccc71e918" containerID="6a85540471008882969c933ba22ea3df1c58905bfb7812da960ace1c8418b25e" exitCode=0 Jan 29 15:04:17 crc kubenswrapper[4620]: I0129 15:04:17.581894 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" event={"ID":"f9d96dcf-b094-485c-8636-401ccc71e918","Type":"ContainerDied","Data":"6a85540471008882969c933ba22ea3df1c58905bfb7812da960ace1c8418b25e"} Jan 29 15:04:18 crc kubenswrapper[4620]: I0129 15:04:18.589282 4620 generic.go:334] "Generic (PLEG): container finished" podID="58f39c40-ed87-43dd-90d0-d892d4a56375" containerID="0c3ba8b41a42de2e08920d865a9072841f7a9e29e605ad2e035f4d7f8bc432ca" exitCode=0 Jan 29 15:04:18 crc kubenswrapper[4620]: I0129 15:04:18.589434 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb" event={"ID":"58f39c40-ed87-43dd-90d0-d892d4a56375","Type":"ContainerDied","Data":"0c3ba8b41a42de2e08920d865a9072841f7a9e29e605ad2e035f4d7f8bc432ca"} Jan 29 15:04:19 crc kubenswrapper[4620]: I0129 15:04:19.118050 4620 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-phzrb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 29 15:04:19 crc kubenswrapper[4620]: I0129 15:04:19.118121 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb" podUID="58f39c40-ed87-43dd-90d0-d892d4a56375" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 29 15:04:20 crc kubenswrapper[4620]: I0129 15:04:20.528130 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:04:20 crc kubenswrapper[4620]: I0129 15:04:20.535497 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:04:20 crc kubenswrapper[4620]: I0129 15:04:20.545028 4620 patch_prober.go:28] interesting 
pod/controller-manager-879f6c89f-2j567 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 29 15:04:20 crc kubenswrapper[4620]: I0129 15:04:20.545065 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" podUID="f9d96dcf-b094-485c-8636-401ccc71e918" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 29 15:04:20 crc kubenswrapper[4620]: I0129 15:04:20.561296 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-gwr5k" Jan 29 15:04:21 crc kubenswrapper[4620]: I0129 15:04:21.977090 4620 patch_prober.go:28] interesting pod/router-default-5444994796-5ppqg container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 15:04:21 crc kubenswrapper[4620]: I0129 15:04:21.977188 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-5ppqg" podUID="cc4969bb-ac64-4361-8666-99de6de39271" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:04:24 crc kubenswrapper[4620]: I0129 15:04:24.633310 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:04:26 crc kubenswrapper[4620]: E0129 15:04:26.704040 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:04:26 crc kubenswrapper[4620]: E0129 15:04:26.704616 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xbrp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-xwr75_openshift-marketplace(2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:26 crc kubenswrapper[4620]: E0129 15:04:26.705014 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:04:26 crc kubenswrapper[4620]: E0129 15:04:26.705660 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5k7hs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jn68n_openshift-marketplace(67739a3a-d009-4685-a79a-aaa81f5b2daf): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:26 crc kubenswrapper[4620]: E0129 15:04:26.705704 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-xwr75" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" Jan 29 15:04:26 crc kubenswrapper[4620]: E0129 15:04:26.707995 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-jn68n" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" Jan 29 15:04:26 crc kubenswrapper[4620]: E0129 15:04:26.709321 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:04:26 crc kubenswrapper[4620]: E0129 15:04:26.709523 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:04:26 crc kubenswrapper[4620]: E0129 15:04:26.709685 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqh7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tqw2q_openshift-marketplace(a7f8b2b8-8396-425a-94ac-e66deddac937): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:26 crc kubenswrapper[4620]: E0129 15:04:26.709909 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8jkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-669zb_openshift-marketplace(60d03ebd-82d4-4ffe-895e-2909c15480d7): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:26 crc kubenswrapper[4620]: E0129 15:04:26.710920 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-tqw2q" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" Jan 29 15:04:26 crc kubenswrapper[4620]: E0129 15:04:26.712012 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-669zb" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" Jan 29 15:04:26 crc kubenswrapper[4620]: E0129 15:04:26.714142 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:04:26 crc kubenswrapper[4620]: E0129 15:04:26.714237 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nw6hg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-mzcfs_openshift-marketplace(d401dd8c-5cfc-4cbd-92de-bb9896e90ea0): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" 
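The marketplace catalog pods above are all failing the same way: before the image service can pull registry.redhat.io/redhat/*-index:v4.18, it must exchange credentials for a bearer token, and the registry's token endpoint answers 403 (Forbidden). A 403 at that step is a credential rejection, not a missing image. Below is a minimal Go sketch of the first step of the standard Docker Registry HTTP API v2 token handshake that precedes the "Requesting bearer token" failure logged here; it is illustrative only, assumes registry.redhat.io follows the stock v2 auth flow, and a real pull would authenticate with the cluster pull secret rather than pinging anonymously.

// registry_token_probe.go - sketch of the v2 token handshake preceding the
// "Requesting bearer token: invalid status code from registry 403" errors
// above. Assumption: standard Docker Registry v2 token auth; no Red
// Hat-specific endpoints are hardcoded, the realm is read from the reply.
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Anonymous ping of the registry API root. A v2 registry replies
	// 401 Unauthorized and advertises its token service in the
	// WWW-Authenticate header, e.g. Bearer realm="...",service="...".
	resp, err := http.Get("https://registry.redhat.io/v2/")
	if err != nil {
		fmt.Fprintln(os.Stderr, "registry unreachable:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	fmt.Println("ping status:  ", resp.Status)
	fmt.Println("token service:", resp.Header.Get("WWW-Authenticate"))

	// The image service then GETs the advertised realm with
	// ?service=...&scope=repository:redhat/certified-operator-index:pull
	// using basic auth taken from the pull secret. The 403 in the log
	// means that request was rejected: the credentials for
	// registry.redhat.io are absent, wrong, or expired.
}

If the anonymous ping yields the expected 401 with a Bearer realm while the authenticated token request still returns 403, the pull-secret entry for registry.redhat.io is the usual suspect; on OpenShift the common remedy is refreshing the global pull secret in the openshift-config namespace and letting the nodes re-sync it, after which the ImagePullBackOff entries that follow should clear on the next retry.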
Jan 29 15:04:26 crc kubenswrapper[4620]: E0129 15:04:26.715533 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-mzcfs" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" Jan 29 15:04:26 crc kubenswrapper[4620]: E0129 15:04:26.721136 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:04:26 crc kubenswrapper[4620]: E0129 15:04:26.721433 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6ws5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-l8bmh_openshift-marketplace(0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:26 crc kubenswrapper[4620]: E0129 15:04:26.723012 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-l8bmh" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" Jan 29 15:04:26 crc kubenswrapper[4620]: E0129 15:04:26.724087 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: 
invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:04:26 crc kubenswrapper[4620]: E0129 15:04:26.724197 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktvhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-mn8xm_openshift-marketplace(8ced9400-3ad4-4717-ab18-327d5e40daa4): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:26 crc kubenswrapper[4620]: E0129 15:04:26.725379 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-mn8xm" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.044459 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.066712 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z"] Jan 29 15:04:27 crc kubenswrapper[4620]: E0129 15:04:27.066953 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03c75101-bbca-4251-9357-a834826c89bd" containerName="pruner" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.066966 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c75101-bbca-4251-9357-a834826c89bd" containerName="pruner" Jan 29 15:04:27 crc kubenswrapper[4620]: E0129 15:04:27.066979 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4147aba-e9a4-47c8-8ceb-1a743b34c328" containerName="pruner" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.066984 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4147aba-e9a4-47c8-8ceb-1a743b34c328" containerName="pruner" Jan 29 15:04:27 crc kubenswrapper[4620]: E0129 15:04:27.066998 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58f39c40-ed87-43dd-90d0-d892d4a56375" containerName="route-controller-manager" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.067003 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="58f39c40-ed87-43dd-90d0-d892d4a56375" containerName="route-controller-manager" Jan 29 15:04:27 crc kubenswrapper[4620]: E0129 15:04:27.067011 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="713c81b7-8d56-4d58-bd4e-f827de0ca17b" containerName="collect-profiles" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.067016 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="713c81b7-8d56-4d58-bd4e-f827de0ca17b" containerName="collect-profiles" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.067111 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="58f39c40-ed87-43dd-90d0-d892d4a56375" containerName="route-controller-manager" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.067119 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="03c75101-bbca-4251-9357-a834826c89bd" containerName="pruner" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.067125 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="713c81b7-8d56-4d58-bd4e-f827de0ca17b" containerName="collect-profiles" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.067139 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4147aba-e9a4-47c8-8ceb-1a743b34c328" containerName="pruner" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.067670 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.082956 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z"] Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.117836 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.158410 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58f39c40-ed87-43dd-90d0-d892d4a56375-client-ca\") pod \"58f39c40-ed87-43dd-90d0-d892d4a56375\" (UID: \"58f39c40-ed87-43dd-90d0-d892d4a56375\") " Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.158462 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwlxb\" (UniqueName: \"kubernetes.io/projected/58f39c40-ed87-43dd-90d0-d892d4a56375-kube-api-access-pwlxb\") pod \"58f39c40-ed87-43dd-90d0-d892d4a56375\" (UID: \"58f39c40-ed87-43dd-90d0-d892d4a56375\") " Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.158518 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58f39c40-ed87-43dd-90d0-d892d4a56375-config\") pod \"58f39c40-ed87-43dd-90d0-d892d4a56375\" (UID: \"58f39c40-ed87-43dd-90d0-d892d4a56375\") " Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.158575 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58f39c40-ed87-43dd-90d0-d892d4a56375-serving-cert\") pod \"58f39c40-ed87-43dd-90d0-d892d4a56375\" (UID: \"58f39c40-ed87-43dd-90d0-d892d4a56375\") " Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.160611 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58f39c40-ed87-43dd-90d0-d892d4a56375-config" (OuterVolumeSpecName: "config") pod "58f39c40-ed87-43dd-90d0-d892d4a56375" (UID: "58f39c40-ed87-43dd-90d0-d892d4a56375"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.160780 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58f39c40-ed87-43dd-90d0-d892d4a56375-client-ca" (OuterVolumeSpecName: "client-ca") pod "58f39c40-ed87-43dd-90d0-d892d4a56375" (UID: "58f39c40-ed87-43dd-90d0-d892d4a56375"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.164802 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f39c40-ed87-43dd-90d0-d892d4a56375-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "58f39c40-ed87-43dd-90d0-d892d4a56375" (UID: "58f39c40-ed87-43dd-90d0-d892d4a56375"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.167895 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58f39c40-ed87-43dd-90d0-d892d4a56375-kube-api-access-pwlxb" (OuterVolumeSpecName: "kube-api-access-pwlxb") pod "58f39c40-ed87-43dd-90d0-d892d4a56375" (UID: "58f39c40-ed87-43dd-90d0-d892d4a56375"). InnerVolumeSpecName "kube-api-access-pwlxb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.259615 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9d96dcf-b094-485c-8636-401ccc71e918-serving-cert\") pod \"f9d96dcf-b094-485c-8636-401ccc71e918\" (UID: \"f9d96dcf-b094-485c-8636-401ccc71e918\") " Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.259675 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9d96dcf-b094-485c-8636-401ccc71e918-proxy-ca-bundles\") pod \"f9d96dcf-b094-485c-8636-401ccc71e918\" (UID: \"f9d96dcf-b094-485c-8636-401ccc71e918\") " Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.259709 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9d96dcf-b094-485c-8636-401ccc71e918-config\") pod \"f9d96dcf-b094-485c-8636-401ccc71e918\" (UID: \"f9d96dcf-b094-485c-8636-401ccc71e918\") " Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.259747 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9d96dcf-b094-485c-8636-401ccc71e918-client-ca\") pod \"f9d96dcf-b094-485c-8636-401ccc71e918\" (UID: \"f9d96dcf-b094-485c-8636-401ccc71e918\") " Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.259899 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v569s\" (UniqueName: \"kubernetes.io/projected/f9d96dcf-b094-485c-8636-401ccc71e918-kube-api-access-v569s\") pod \"f9d96dcf-b094-485c-8636-401ccc71e918\" (UID: \"f9d96dcf-b094-485c-8636-401ccc71e918\") " Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.260078 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86b6d593-e53b-47ad-9de5-30d2c4b4f410-serving-cert\") pod \"route-controller-manager-56449bd9fb-src7z\" (UID: \"86b6d593-e53b-47ad-9de5-30d2c4b4f410\") " pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.260115 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86b6d593-e53b-47ad-9de5-30d2c4b4f410-config\") pod \"route-controller-manager-56449bd9fb-src7z\" (UID: \"86b6d593-e53b-47ad-9de5-30d2c4b4f410\") " pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.260159 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86b6d593-e53b-47ad-9de5-30d2c4b4f410-client-ca\") pod \"route-controller-manager-56449bd9fb-src7z\" (UID: \"86b6d593-e53b-47ad-9de5-30d2c4b4f410\") " pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.260247 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfpgp\" (UniqueName: \"kubernetes.io/projected/86b6d593-e53b-47ad-9de5-30d2c4b4f410-kube-api-access-xfpgp\") pod \"route-controller-manager-56449bd9fb-src7z\" (UID: \"86b6d593-e53b-47ad-9de5-30d2c4b4f410\") " 
pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.260306 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58f39c40-ed87-43dd-90d0-d892d4a56375-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.260324 4620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58f39c40-ed87-43dd-90d0-d892d4a56375-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.260335 4620 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58f39c40-ed87-43dd-90d0-d892d4a56375-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.260345 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwlxb\" (UniqueName: \"kubernetes.io/projected/58f39c40-ed87-43dd-90d0-d892d4a56375-kube-api-access-pwlxb\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.260494 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9d96dcf-b094-485c-8636-401ccc71e918-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f9d96dcf-b094-485c-8636-401ccc71e918" (UID: "f9d96dcf-b094-485c-8636-401ccc71e918"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.260798 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9d96dcf-b094-485c-8636-401ccc71e918-client-ca" (OuterVolumeSpecName: "client-ca") pod "f9d96dcf-b094-485c-8636-401ccc71e918" (UID: "f9d96dcf-b094-485c-8636-401ccc71e918"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.261819 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9d96dcf-b094-485c-8636-401ccc71e918-config" (OuterVolumeSpecName: "config") pod "f9d96dcf-b094-485c-8636-401ccc71e918" (UID: "f9d96dcf-b094-485c-8636-401ccc71e918"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.262949 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9d96dcf-b094-485c-8636-401ccc71e918-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f9d96dcf-b094-485c-8636-401ccc71e918" (UID: "f9d96dcf-b094-485c-8636-401ccc71e918"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.262994 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9d96dcf-b094-485c-8636-401ccc71e918-kube-api-access-v569s" (OuterVolumeSpecName: "kube-api-access-v569s") pod "f9d96dcf-b094-485c-8636-401ccc71e918" (UID: "f9d96dcf-b094-485c-8636-401ccc71e918"). InnerVolumeSpecName "kube-api-access-v569s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.361499 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfpgp\" (UniqueName: \"kubernetes.io/projected/86b6d593-e53b-47ad-9de5-30d2c4b4f410-kube-api-access-xfpgp\") pod \"route-controller-manager-56449bd9fb-src7z\" (UID: \"86b6d593-e53b-47ad-9de5-30d2c4b4f410\") " pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.361985 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86b6d593-e53b-47ad-9de5-30d2c4b4f410-serving-cert\") pod \"route-controller-manager-56449bd9fb-src7z\" (UID: \"86b6d593-e53b-47ad-9de5-30d2c4b4f410\") " pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.362015 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86b6d593-e53b-47ad-9de5-30d2c4b4f410-config\") pod \"route-controller-manager-56449bd9fb-src7z\" (UID: \"86b6d593-e53b-47ad-9de5-30d2c4b4f410\") " pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.362043 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86b6d593-e53b-47ad-9de5-30d2c4b4f410-client-ca\") pod \"route-controller-manager-56449bd9fb-src7z\" (UID: \"86b6d593-e53b-47ad-9de5-30d2c4b4f410\") " pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.362103 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v569s\" (UniqueName: \"kubernetes.io/projected/f9d96dcf-b094-485c-8636-401ccc71e918-kube-api-access-v569s\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.362115 4620 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9d96dcf-b094-485c-8636-401ccc71e918-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.362124 4620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9d96dcf-b094-485c-8636-401ccc71e918-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.362133 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9d96dcf-b094-485c-8636-401ccc71e918-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.362142 4620 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9d96dcf-b094-485c-8636-401ccc71e918-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.363524 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86b6d593-e53b-47ad-9de5-30d2c4b4f410-client-ca\") pod \"route-controller-manager-56449bd9fb-src7z\" (UID: \"86b6d593-e53b-47ad-9de5-30d2c4b4f410\") " pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" Jan 29 
15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.363931 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86b6d593-e53b-47ad-9de5-30d2c4b4f410-config\") pod \"route-controller-manager-56449bd9fb-src7z\" (UID: \"86b6d593-e53b-47ad-9de5-30d2c4b4f410\") " pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.367844 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86b6d593-e53b-47ad-9de5-30d2c4b4f410-serving-cert\") pod \"route-controller-manager-56449bd9fb-src7z\" (UID: \"86b6d593-e53b-47ad-9de5-30d2c4b4f410\") " pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.377337 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfpgp\" (UniqueName: \"kubernetes.io/projected/86b6d593-e53b-47ad-9de5-30d2c4b4f410-kube-api-access-xfpgp\") pod \"route-controller-manager-56449bd9fb-src7z\" (UID: \"86b6d593-e53b-47ad-9de5-30d2c4b4f410\") " pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.416475 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.602714 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z"] Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.652524 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" event={"ID":"f9d96dcf-b094-485c-8636-401ccc71e918","Type":"ContainerDied","Data":"321c7ce6490981b385b9380849cfd37e80cdc6c7998c962d10d93bb8b491b30c"} Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.652583 4620 scope.go:117] "RemoveContainer" containerID="6a85540471008882969c933ba22ea3df1c58905bfb7812da960ace1c8418b25e" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.652704 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2j567" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.656015 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.656362 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb" event={"ID":"58f39c40-ed87-43dd-90d0-d892d4a56375","Type":"ContainerDied","Data":"7d19f101c4f72540f3a8a430136a2ef23f3f8484262961f8c3c3886a9e57ac57"} Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.657475 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" event={"ID":"86b6d593-e53b-47ad-9de5-30d2c4b4f410","Type":"ContainerStarted","Data":"f4b83849cc5880f83dd4d0130e37cef1c075745d8aa47b5a9d0ac8cf87b79dfe"} Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.693457 4620 scope.go:117] "RemoveContainer" containerID="0c3ba8b41a42de2e08920d865a9072841f7a9e29e605ad2e035f4d7f8bc432ca" Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.708954 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2j567"] Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.720803 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2j567"] Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.724593 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb"] Jan 29 15:04:27 crc kubenswrapper[4620]: I0129 15:04:27.738023 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-phzrb"] Jan 29 15:04:28 crc kubenswrapper[4620]: I0129 15:04:28.664568 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" event={"ID":"86b6d593-e53b-47ad-9de5-30d2c4b4f410","Type":"ContainerStarted","Data":"f5004dd1f6a8c32aa729a79febb4ad4e8f08e606539f30ee328d585b522cce92"} Jan 29 15:04:28 crc kubenswrapper[4620]: I0129 15:04:28.665117 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" Jan 29 15:04:28 crc kubenswrapper[4620]: I0129 15:04:28.671600 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" Jan 29 15:04:28 crc kubenswrapper[4620]: I0129 15:04:28.680565 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" podStartSLOduration=12.680554206 podStartE2EDuration="12.680554206s" podCreationTimestamp="2026-01-29 15:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:04:28.677321552 +0000 UTC m=+209.290149217" watchObservedRunningTime="2026-01-29 15:04:28.680554206 +0000 UTC m=+209.293381851" Jan 29 15:04:28 crc kubenswrapper[4620]: E0129 15:04:28.875897 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-lg725" 
podUID="2aaa7ad3-f4ef-4a15-993d-43166028f71b" Jan 29 15:04:28 crc kubenswrapper[4620]: I0129 15:04:28.881076 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58f39c40-ed87-43dd-90d0-d892d4a56375" path="/var/lib/kubelet/pods/58f39c40-ed87-43dd-90d0-d892d4a56375/volumes" Jan 29 15:04:28 crc kubenswrapper[4620]: I0129 15:04:28.881792 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9d96dcf-b094-485c-8636-401ccc71e918" path="/var/lib/kubelet/pods/f9d96dcf-b094-485c-8636-401ccc71e918/volumes" Jan 29 15:04:29 crc kubenswrapper[4620]: I0129 15:04:29.955034 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx"] Jan 29 15:04:29 crc kubenswrapper[4620]: E0129 15:04:29.955545 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9d96dcf-b094-485c-8636-401ccc71e918" containerName="controller-manager" Jan 29 15:04:29 crc kubenswrapper[4620]: I0129 15:04:29.955563 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d96dcf-b094-485c-8636-401ccc71e918" containerName="controller-manager" Jan 29 15:04:29 crc kubenswrapper[4620]: I0129 15:04:29.955830 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9d96dcf-b094-485c-8636-401ccc71e918" containerName="controller-manager" Jan 29 15:04:29 crc kubenswrapper[4620]: I0129 15:04:29.956282 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" Jan 29 15:04:29 crc kubenswrapper[4620]: I0129 15:04:29.959454 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 15:04:29 crc kubenswrapper[4620]: I0129 15:04:29.959727 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:04:29 crc kubenswrapper[4620]: I0129 15:04:29.965998 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx"] Jan 29 15:04:29 crc kubenswrapper[4620]: I0129 15:04:29.966714 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:04:29 crc kubenswrapper[4620]: I0129 15:04:29.971858 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 15:04:29 crc kubenswrapper[4620]: I0129 15:04:29.972110 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 15:04:29 crc kubenswrapper[4620]: I0129 15:04:29.972297 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 15:04:29 crc kubenswrapper[4620]: I0129 15:04:29.982654 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:04:30 crc kubenswrapper[4620]: I0129 15:04:30.098120 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2f81965-804e-47b6-92d7-ab6b7841e874-client-ca\") pod \"controller-manager-5bbcb9877c-7ljlx\" (UID: \"e2f81965-804e-47b6-92d7-ab6b7841e874\") " pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" Jan 29 15:04:30 crc kubenswrapper[4620]: I0129 15:04:30.098183 4620 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e2f81965-804e-47b6-92d7-ab6b7841e874-proxy-ca-bundles\") pod \"controller-manager-5bbcb9877c-7ljlx\" (UID: \"e2f81965-804e-47b6-92d7-ab6b7841e874\") " pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" Jan 29 15:04:30 crc kubenswrapper[4620]: I0129 15:04:30.098325 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2f81965-804e-47b6-92d7-ab6b7841e874-serving-cert\") pod \"controller-manager-5bbcb9877c-7ljlx\" (UID: \"e2f81965-804e-47b6-92d7-ab6b7841e874\") " pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" Jan 29 15:04:30 crc kubenswrapper[4620]: I0129 15:04:30.098566 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f81965-804e-47b6-92d7-ab6b7841e874-config\") pod \"controller-manager-5bbcb9877c-7ljlx\" (UID: \"e2f81965-804e-47b6-92d7-ab6b7841e874\") " pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" Jan 29 15:04:30 crc kubenswrapper[4620]: I0129 15:04:30.098788 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slm7q\" (UniqueName: \"kubernetes.io/projected/e2f81965-804e-47b6-92d7-ab6b7841e874-kube-api-access-slm7q\") pod \"controller-manager-5bbcb9877c-7ljlx\" (UID: \"e2f81965-804e-47b6-92d7-ab6b7841e874\") " pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" Jan 29 15:04:30 crc kubenswrapper[4620]: I0129 15:04:30.199668 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2f81965-804e-47b6-92d7-ab6b7841e874-client-ca\") pod \"controller-manager-5bbcb9877c-7ljlx\" (UID: \"e2f81965-804e-47b6-92d7-ab6b7841e874\") " pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" Jan 29 15:04:30 crc kubenswrapper[4620]: I0129 15:04:30.200040 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e2f81965-804e-47b6-92d7-ab6b7841e874-proxy-ca-bundles\") pod \"controller-manager-5bbcb9877c-7ljlx\" (UID: \"e2f81965-804e-47b6-92d7-ab6b7841e874\") " pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" Jan 29 15:04:30 crc kubenswrapper[4620]: I0129 15:04:30.200071 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2f81965-804e-47b6-92d7-ab6b7841e874-serving-cert\") pod \"controller-manager-5bbcb9877c-7ljlx\" (UID: \"e2f81965-804e-47b6-92d7-ab6b7841e874\") " pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" Jan 29 15:04:30 crc kubenswrapper[4620]: I0129 15:04:30.200129 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f81965-804e-47b6-92d7-ab6b7841e874-config\") pod \"controller-manager-5bbcb9877c-7ljlx\" (UID: \"e2f81965-804e-47b6-92d7-ab6b7841e874\") " pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" Jan 29 15:04:30 crc kubenswrapper[4620]: I0129 15:04:30.200219 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slm7q\" (UniqueName: 
\"kubernetes.io/projected/e2f81965-804e-47b6-92d7-ab6b7841e874-kube-api-access-slm7q\") pod \"controller-manager-5bbcb9877c-7ljlx\" (UID: \"e2f81965-804e-47b6-92d7-ab6b7841e874\") " pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" Jan 29 15:04:30 crc kubenswrapper[4620]: I0129 15:04:30.200732 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2f81965-804e-47b6-92d7-ab6b7841e874-client-ca\") pod \"controller-manager-5bbcb9877c-7ljlx\" (UID: \"e2f81965-804e-47b6-92d7-ab6b7841e874\") " pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" Jan 29 15:04:30 crc kubenswrapper[4620]: I0129 15:04:30.201290 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e2f81965-804e-47b6-92d7-ab6b7841e874-proxy-ca-bundles\") pod \"controller-manager-5bbcb9877c-7ljlx\" (UID: \"e2f81965-804e-47b6-92d7-ab6b7841e874\") " pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" Jan 29 15:04:30 crc kubenswrapper[4620]: I0129 15:04:30.201784 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f81965-804e-47b6-92d7-ab6b7841e874-config\") pod \"controller-manager-5bbcb9877c-7ljlx\" (UID: \"e2f81965-804e-47b6-92d7-ab6b7841e874\") " pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" Jan 29 15:04:30 crc kubenswrapper[4620]: I0129 15:04:30.209645 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2f81965-804e-47b6-92d7-ab6b7841e874-serving-cert\") pod \"controller-manager-5bbcb9877c-7ljlx\" (UID: \"e2f81965-804e-47b6-92d7-ab6b7841e874\") " pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" Jan 29 15:04:30 crc kubenswrapper[4620]: I0129 15:04:30.217016 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slm7q\" (UniqueName: \"kubernetes.io/projected/e2f81965-804e-47b6-92d7-ab6b7841e874-kube-api-access-slm7q\") pod \"controller-manager-5bbcb9877c-7ljlx\" (UID: \"e2f81965-804e-47b6-92d7-ab6b7841e874\") " pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" Jan 29 15:04:30 crc kubenswrapper[4620]: I0129 15:04:30.287271 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" Jan 29 15:04:30 crc kubenswrapper[4620]: I0129 15:04:30.488218 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx"] Jan 29 15:04:30 crc kubenswrapper[4620]: I0129 15:04:30.679637 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" event={"ID":"e2f81965-804e-47b6-92d7-ab6b7841e874","Type":"ContainerStarted","Data":"e20887f3d8d96a77cbb80f75f23925b63a6ad11e85618a9ae42cfb4f307c6862"} Jan 29 15:04:30 crc kubenswrapper[4620]: I0129 15:04:30.679700 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" event={"ID":"e2f81965-804e-47b6-92d7-ab6b7841e874","Type":"ContainerStarted","Data":"67812e0a9e7110dcde841d03cee16e3433272587aa88c6750b712a15b900f4cf"} Jan 29 15:04:30 crc kubenswrapper[4620]: I0129 15:04:30.701549 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" podStartSLOduration=14.70152944 podStartE2EDuration="14.70152944s" podCreationTimestamp="2026-01-29 15:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:04:30.697663875 +0000 UTC m=+211.310491530" watchObservedRunningTime="2026-01-29 15:04:30.70152944 +0000 UTC m=+211.314357075" Jan 29 15:04:31 crc kubenswrapper[4620]: I0129 15:04:31.087066 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cl5pv" Jan 29 15:04:31 crc kubenswrapper[4620]: I0129 15:04:31.684524 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" Jan 29 15:04:31 crc kubenswrapper[4620]: I0129 15:04:31.689539 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" Jan 29 15:04:34 crc kubenswrapper[4620]: I0129 15:04:34.111301 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:04:34 crc kubenswrapper[4620]: I0129 15:04:34.112946 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:04:35 crc kubenswrapper[4620]: I0129 15:04:35.848134 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8q7cm"] Jan 29 15:04:36 crc kubenswrapper[4620]: I0129 15:04:36.434046 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx"] Jan 29 15:04:36 crc kubenswrapper[4620]: I0129 15:04:36.434282 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" 
podUID="e2f81965-804e-47b6-92d7-ab6b7841e874" containerName="controller-manager" containerID="cri-o://e20887f3d8d96a77cbb80f75f23925b63a6ad11e85618a9ae42cfb4f307c6862" gracePeriod=30 Jan 29 15:04:36 crc kubenswrapper[4620]: I0129 15:04:36.529704 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z"] Jan 29 15:04:36 crc kubenswrapper[4620]: I0129 15:04:36.529947 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" podUID="86b6d593-e53b-47ad-9de5-30d2c4b4f410" containerName="route-controller-manager" containerID="cri-o://f5004dd1f6a8c32aa729a79febb4ad4e8f08e606539f30ee328d585b522cce92" gracePeriod=30 Jan 29 15:04:36 crc kubenswrapper[4620]: I0129 15:04:36.730516 4620 generic.go:334] "Generic (PLEG): container finished" podID="86b6d593-e53b-47ad-9de5-30d2c4b4f410" containerID="f5004dd1f6a8c32aa729a79febb4ad4e8f08e606539f30ee328d585b522cce92" exitCode=0 Jan 29 15:04:36 crc kubenswrapper[4620]: I0129 15:04:36.730583 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" event={"ID":"86b6d593-e53b-47ad-9de5-30d2c4b4f410","Type":"ContainerDied","Data":"f5004dd1f6a8c32aa729a79febb4ad4e8f08e606539f30ee328d585b522cce92"} Jan 29 15:04:36 crc kubenswrapper[4620]: I0129 15:04:36.738432 4620 generic.go:334] "Generic (PLEG): container finished" podID="e2f81965-804e-47b6-92d7-ab6b7841e874" containerID="e20887f3d8d96a77cbb80f75f23925b63a6ad11e85618a9ae42cfb4f307c6862" exitCode=0 Jan 29 15:04:36 crc kubenswrapper[4620]: I0129 15:04:36.738504 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" event={"ID":"e2f81965-804e-47b6-92d7-ab6b7841e874","Type":"ContainerDied","Data":"e20887f3d8d96a77cbb80f75f23925b63a6ad11e85618a9ae42cfb4f307c6862"} Jan 29 15:04:36 crc kubenswrapper[4620]: E0129 15:04:36.874060 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-xwr75" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.088289 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.104871 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86b6d593-e53b-47ad-9de5-30d2c4b4f410-serving-cert\") pod \"86b6d593-e53b-47ad-9de5-30d2c4b4f410\" (UID: \"86b6d593-e53b-47ad-9de5-30d2c4b4f410\") " Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.104944 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfpgp\" (UniqueName: \"kubernetes.io/projected/86b6d593-e53b-47ad-9de5-30d2c4b4f410-kube-api-access-xfpgp\") pod \"86b6d593-e53b-47ad-9de5-30d2c4b4f410\" (UID: \"86b6d593-e53b-47ad-9de5-30d2c4b4f410\") " Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.104969 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86b6d593-e53b-47ad-9de5-30d2c4b4f410-client-ca\") pod \"86b6d593-e53b-47ad-9de5-30d2c4b4f410\" (UID: \"86b6d593-e53b-47ad-9de5-30d2c4b4f410\") " Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.105039 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86b6d593-e53b-47ad-9de5-30d2c4b4f410-config\") pod \"86b6d593-e53b-47ad-9de5-30d2c4b4f410\" (UID: \"86b6d593-e53b-47ad-9de5-30d2c4b4f410\") " Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.105901 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86b6d593-e53b-47ad-9de5-30d2c4b4f410-config" (OuterVolumeSpecName: "config") pod "86b6d593-e53b-47ad-9de5-30d2c4b4f410" (UID: "86b6d593-e53b-47ad-9de5-30d2c4b4f410"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.106371 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86b6d593-e53b-47ad-9de5-30d2c4b4f410-client-ca" (OuterVolumeSpecName: "client-ca") pod "86b6d593-e53b-47ad-9de5-30d2c4b4f410" (UID: "86b6d593-e53b-47ad-9de5-30d2c4b4f410"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.110974 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86b6d593-e53b-47ad-9de5-30d2c4b4f410-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "86b6d593-e53b-47ad-9de5-30d2c4b4f410" (UID: "86b6d593-e53b-47ad-9de5-30d2c4b4f410"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.111062 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86b6d593-e53b-47ad-9de5-30d2c4b4f410-kube-api-access-xfpgp" (OuterVolumeSpecName: "kube-api-access-xfpgp") pod "86b6d593-e53b-47ad-9de5-30d2c4b4f410" (UID: "86b6d593-e53b-47ad-9de5-30d2c4b4f410"). InnerVolumeSpecName "kube-api-access-xfpgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.133810 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.206267 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2f81965-804e-47b6-92d7-ab6b7841e874-serving-cert\") pod \"e2f81965-804e-47b6-92d7-ab6b7841e874\" (UID: \"e2f81965-804e-47b6-92d7-ab6b7841e874\") " Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.206332 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2f81965-804e-47b6-92d7-ab6b7841e874-client-ca\") pod \"e2f81965-804e-47b6-92d7-ab6b7841e874\" (UID: \"e2f81965-804e-47b6-92d7-ab6b7841e874\") " Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.206377 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e2f81965-804e-47b6-92d7-ab6b7841e874-proxy-ca-bundles\") pod \"e2f81965-804e-47b6-92d7-ab6b7841e874\" (UID: \"e2f81965-804e-47b6-92d7-ab6b7841e874\") " Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.206436 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slm7q\" (UniqueName: \"kubernetes.io/projected/e2f81965-804e-47b6-92d7-ab6b7841e874-kube-api-access-slm7q\") pod \"e2f81965-804e-47b6-92d7-ab6b7841e874\" (UID: \"e2f81965-804e-47b6-92d7-ab6b7841e874\") " Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.206481 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f81965-804e-47b6-92d7-ab6b7841e874-config\") pod \"e2f81965-804e-47b6-92d7-ab6b7841e874\" (UID: \"e2f81965-804e-47b6-92d7-ab6b7841e874\") " Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.206718 4620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86b6d593-e53b-47ad-9de5-30d2c4b4f410-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.206734 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfpgp\" (UniqueName: \"kubernetes.io/projected/86b6d593-e53b-47ad-9de5-30d2c4b4f410-kube-api-access-xfpgp\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.206748 4620 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86b6d593-e53b-47ad-9de5-30d2c4b4f410-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.206775 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86b6d593-e53b-47ad-9de5-30d2c4b4f410-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.207295 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2f81965-804e-47b6-92d7-ab6b7841e874-config" (OuterVolumeSpecName: "config") pod "e2f81965-804e-47b6-92d7-ab6b7841e874" (UID: "e2f81965-804e-47b6-92d7-ab6b7841e874"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.207863 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2f81965-804e-47b6-92d7-ab6b7841e874-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e2f81965-804e-47b6-92d7-ab6b7841e874" (UID: "e2f81965-804e-47b6-92d7-ab6b7841e874"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.207991 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2f81965-804e-47b6-92d7-ab6b7841e874-client-ca" (OuterVolumeSpecName: "client-ca") pod "e2f81965-804e-47b6-92d7-ab6b7841e874" (UID: "e2f81965-804e-47b6-92d7-ab6b7841e874"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.209709 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2f81965-804e-47b6-92d7-ab6b7841e874-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e2f81965-804e-47b6-92d7-ab6b7841e874" (UID: "e2f81965-804e-47b6-92d7-ab6b7841e874"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.209941 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2f81965-804e-47b6-92d7-ab6b7841e874-kube-api-access-slm7q" (OuterVolumeSpecName: "kube-api-access-slm7q") pod "e2f81965-804e-47b6-92d7-ab6b7841e874" (UID: "e2f81965-804e-47b6-92d7-ab6b7841e874"). InnerVolumeSpecName "kube-api-access-slm7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.308243 4620 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2f81965-804e-47b6-92d7-ab6b7841e874-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.308288 4620 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e2f81965-804e-47b6-92d7-ab6b7841e874-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.308302 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slm7q\" (UniqueName: \"kubernetes.io/projected/e2f81965-804e-47b6-92d7-ab6b7841e874-kube-api-access-slm7q\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.308312 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f81965-804e-47b6-92d7-ab6b7841e874-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.308322 4620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2f81965-804e-47b6-92d7-ab6b7841e874-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.744657 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" event={"ID":"e2f81965-804e-47b6-92d7-ab6b7841e874","Type":"ContainerDied","Data":"67812e0a9e7110dcde841d03cee16e3433272587aa88c6750b712a15b900f4cf"} Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.744717 4620 
scope.go:117] "RemoveContainer" containerID="e20887f3d8d96a77cbb80f75f23925b63a6ad11e85618a9ae42cfb4f307c6862" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.744870 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.747890 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" event={"ID":"86b6d593-e53b-47ad-9de5-30d2c4b4f410","Type":"ContainerDied","Data":"f4b83849cc5880f83dd4d0130e37cef1c075745d8aa47b5a9d0ac8cf87b79dfe"} Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.747986 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.762860 4620 scope.go:117] "RemoveContainer" containerID="f5004dd1f6a8c32aa729a79febb4ad4e8f08e606539f30ee328d585b522cce92" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.777589 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z"] Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.783323 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56449bd9fb-src7z"] Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.802549 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx"] Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.806617 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5bbcb9877c-7ljlx"] Jan 29 15:04:37 crc kubenswrapper[4620]: E0129 15:04:37.874428 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-mzcfs" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" Jan 29 15:04:37 crc kubenswrapper[4620]: E0129 15:04:37.875116 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jn68n" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.963963 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn"] Jan 29 15:04:37 crc kubenswrapper[4620]: E0129 15:04:37.964181 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86b6d593-e53b-47ad-9de5-30d2c4b4f410" containerName="route-controller-manager" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.964193 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="86b6d593-e53b-47ad-9de5-30d2c4b4f410" containerName="route-controller-manager" Jan 29 15:04:37 crc kubenswrapper[4620]: E0129 15:04:37.964212 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f81965-804e-47b6-92d7-ab6b7841e874" containerName="controller-manager" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.964218 4620 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f81965-804e-47b6-92d7-ab6b7841e874" containerName="controller-manager" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.964306 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="86b6d593-e53b-47ad-9de5-30d2c4b4f410" containerName="route-controller-manager" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.964323 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2f81965-804e-47b6-92d7-ab6b7841e874" containerName="controller-manager" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.964769 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.967909 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.968213 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.968612 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.968977 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.969136 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.969224 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86cb85f869-8n66x"] Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.969281 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.970001 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.978428 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.978590 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.978645 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.978805 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.981726 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.982008 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.983721 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn"] Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.983778 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:04:37 crc kubenswrapper[4620]: I0129 15:04:37.986870 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86cb85f869-8n66x"] Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.016576 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djg8f\" (UniqueName: \"kubernetes.io/projected/05e141a1-4cc9-4674-9e81-ea51e14590ee-kube-api-access-djg8f\") pod \"route-controller-manager-6596989664-5rqsn\" (UID: \"05e141a1-4cc9-4674-9e81-ea51e14590ee\") " pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.016636 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b040de7-ae88-4bea-926d-2e1c27575c6f-client-ca\") pod \"controller-manager-86cb85f869-8n66x\" (UID: \"1b040de7-ae88-4bea-926d-2e1c27575c6f\") " pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.016675 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05e141a1-4cc9-4674-9e81-ea51e14590ee-config\") pod \"route-controller-manager-6596989664-5rqsn\" (UID: \"05e141a1-4cc9-4674-9e81-ea51e14590ee\") " pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.016701 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b040de7-ae88-4bea-926d-2e1c27575c6f-proxy-ca-bundles\") pod \"controller-manager-86cb85f869-8n66x\" (UID: \"1b040de7-ae88-4bea-926d-2e1c27575c6f\") " 
pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.016719 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhs2d\" (UniqueName: \"kubernetes.io/projected/1b040de7-ae88-4bea-926d-2e1c27575c6f-kube-api-access-fhs2d\") pod \"controller-manager-86cb85f869-8n66x\" (UID: \"1b040de7-ae88-4bea-926d-2e1c27575c6f\") " pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.016788 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05e141a1-4cc9-4674-9e81-ea51e14590ee-client-ca\") pod \"route-controller-manager-6596989664-5rqsn\" (UID: \"05e141a1-4cc9-4674-9e81-ea51e14590ee\") " pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.016828 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b040de7-ae88-4bea-926d-2e1c27575c6f-config\") pod \"controller-manager-86cb85f869-8n66x\" (UID: \"1b040de7-ae88-4bea-926d-2e1c27575c6f\") " pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.016846 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05e141a1-4cc9-4674-9e81-ea51e14590ee-serving-cert\") pod \"route-controller-manager-6596989664-5rqsn\" (UID: \"05e141a1-4cc9-4674-9e81-ea51e14590ee\") " pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.016879 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b040de7-ae88-4bea-926d-2e1c27575c6f-serving-cert\") pod \"controller-manager-86cb85f869-8n66x\" (UID: \"1b040de7-ae88-4bea-926d-2e1c27575c6f\") " pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.117530 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b040de7-ae88-4bea-926d-2e1c27575c6f-client-ca\") pod \"controller-manager-86cb85f869-8n66x\" (UID: \"1b040de7-ae88-4bea-926d-2e1c27575c6f\") " pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.117873 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05e141a1-4cc9-4674-9e81-ea51e14590ee-config\") pod \"route-controller-manager-6596989664-5rqsn\" (UID: \"05e141a1-4cc9-4674-9e81-ea51e14590ee\") " pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.118034 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b040de7-ae88-4bea-926d-2e1c27575c6f-proxy-ca-bundles\") pod \"controller-manager-86cb85f869-8n66x\" (UID: \"1b040de7-ae88-4bea-926d-2e1c27575c6f\") " pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" Jan 29 
15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.118887 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05e141a1-4cc9-4674-9e81-ea51e14590ee-config\") pod \"route-controller-manager-6596989664-5rqsn\" (UID: \"05e141a1-4cc9-4674-9e81-ea51e14590ee\") " pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.118889 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhs2d\" (UniqueName: \"kubernetes.io/projected/1b040de7-ae88-4bea-926d-2e1c27575c6f-kube-api-access-fhs2d\") pod \"controller-manager-86cb85f869-8n66x\" (UID: \"1b040de7-ae88-4bea-926d-2e1c27575c6f\") " pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.119299 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b040de7-ae88-4bea-926d-2e1c27575c6f-client-ca\") pod \"controller-manager-86cb85f869-8n66x\" (UID: \"1b040de7-ae88-4bea-926d-2e1c27575c6f\") " pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.119577 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05e141a1-4cc9-4674-9e81-ea51e14590ee-client-ca\") pod \"route-controller-manager-6596989664-5rqsn\" (UID: \"05e141a1-4cc9-4674-9e81-ea51e14590ee\") " pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.121273 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b040de7-ae88-4bea-926d-2e1c27575c6f-config\") pod \"controller-manager-86cb85f869-8n66x\" (UID: \"1b040de7-ae88-4bea-926d-2e1c27575c6f\") " pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.122144 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05e141a1-4cc9-4674-9e81-ea51e14590ee-serving-cert\") pod \"route-controller-manager-6596989664-5rqsn\" (UID: \"05e141a1-4cc9-4674-9e81-ea51e14590ee\") " pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.122380 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b040de7-ae88-4bea-926d-2e1c27575c6f-serving-cert\") pod \"controller-manager-86cb85f869-8n66x\" (UID: \"1b040de7-ae88-4bea-926d-2e1c27575c6f\") " pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.122629 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djg8f\" (UniqueName: \"kubernetes.io/projected/05e141a1-4cc9-4674-9e81-ea51e14590ee-kube-api-access-djg8f\") pod \"route-controller-manager-6596989664-5rqsn\" (UID: \"05e141a1-4cc9-4674-9e81-ea51e14590ee\") " pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.120852 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/05e141a1-4cc9-4674-9e81-ea51e14590ee-client-ca\") pod \"route-controller-manager-6596989664-5rqsn\" (UID: \"05e141a1-4cc9-4674-9e81-ea51e14590ee\") " pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.123102 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b040de7-ae88-4bea-926d-2e1c27575c6f-config\") pod \"controller-manager-86cb85f869-8n66x\" (UID: \"1b040de7-ae88-4bea-926d-2e1c27575c6f\") " pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.123682 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b040de7-ae88-4bea-926d-2e1c27575c6f-proxy-ca-bundles\") pod \"controller-manager-86cb85f869-8n66x\" (UID: \"1b040de7-ae88-4bea-926d-2e1c27575c6f\") " pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.127607 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05e141a1-4cc9-4674-9e81-ea51e14590ee-serving-cert\") pod \"route-controller-manager-6596989664-5rqsn\" (UID: \"05e141a1-4cc9-4674-9e81-ea51e14590ee\") " pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.137803 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djg8f\" (UniqueName: \"kubernetes.io/projected/05e141a1-4cc9-4674-9e81-ea51e14590ee-kube-api-access-djg8f\") pod \"route-controller-manager-6596989664-5rqsn\" (UID: \"05e141a1-4cc9-4674-9e81-ea51e14590ee\") " pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.141059 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b040de7-ae88-4bea-926d-2e1c27575c6f-serving-cert\") pod \"controller-manager-86cb85f869-8n66x\" (UID: \"1b040de7-ae88-4bea-926d-2e1c27575c6f\") " pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.143468 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhs2d\" (UniqueName: \"kubernetes.io/projected/1b040de7-ae88-4bea-926d-2e1c27575c6f-kube-api-access-fhs2d\") pod \"controller-manager-86cb85f869-8n66x\" (UID: \"1b040de7-ae88-4bea-926d-2e1c27575c6f\") " pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.287283 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.298557 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.508911 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn"] Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.849430 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86cb85f869-8n66x"] Jan 29 15:04:38 crc kubenswrapper[4620]: W0129 15:04:38.858290 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b040de7_ae88_4bea_926d_2e1c27575c6f.slice/crio-1e11913cd4914ad7d8f03f3b363447db688cb927e30f02cacc2184932f2f6d37 WatchSource:0}: Error finding container 1e11913cd4914ad7d8f03f3b363447db688cb927e30f02cacc2184932f2f6d37: Status 404 returned error can't find the container with id 1e11913cd4914ad7d8f03f3b363447db688cb927e30f02cacc2184932f2f6d37 Jan 29 15:04:38 crc kubenswrapper[4620]: E0129 15:04:38.873901 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l8bmh" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.886777 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86b6d593-e53b-47ad-9de5-30d2c4b4f410" path="/var/lib/kubelet/pods/86b6d593-e53b-47ad-9de5-30d2c4b4f410/volumes" Jan 29 15:04:38 crc kubenswrapper[4620]: I0129 15:04:38.887690 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2f81965-804e-47b6-92d7-ab6b7841e874" path="/var/lib/kubelet/pods/e2f81965-804e-47b6-92d7-ab6b7841e874/volumes" Jan 29 15:04:39 crc kubenswrapper[4620]: I0129 15:04:39.797729 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" event={"ID":"1b040de7-ae88-4bea-926d-2e1c27575c6f","Type":"ContainerStarted","Data":"10340dd163b4399d1ce5390e4a6d6027cd66d0aac45e2974d404aeeb702a555f"} Jan 29 15:04:39 crc kubenswrapper[4620]: I0129 15:04:39.798382 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" event={"ID":"1b040de7-ae88-4bea-926d-2e1c27575c6f","Type":"ContainerStarted","Data":"1e11913cd4914ad7d8f03f3b363447db688cb927e30f02cacc2184932f2f6d37"} Jan 29 15:04:39 crc kubenswrapper[4620]: I0129 15:04:39.798804 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" event={"ID":"05e141a1-4cc9-4674-9e81-ea51e14590ee","Type":"ContainerStarted","Data":"2a6ab209bd966efc9fd72b7e15606d9abdcae91a4d0ea8bbed37d6e208839aae"} Jan 29 15:04:39 crc kubenswrapper[4620]: I0129 15:04:39.798920 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" event={"ID":"05e141a1-4cc9-4674-9e81-ea51e14590ee","Type":"ContainerStarted","Data":"c163042ca43e0bade83f3a5e88906f7e5a95b09491f9d15d21356da5717e5e18"} Jan 29 15:04:39 crc kubenswrapper[4620]: I0129 15:04:39.799802 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" Jan 29 15:04:39 crc 
kubenswrapper[4620]: I0129 15:04:39.810907 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" Jan 29 15:04:39 crc kubenswrapper[4620]: I0129 15:04:39.833701 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" podStartSLOduration=3.83367609 podStartE2EDuration="3.83367609s" podCreationTimestamp="2026-01-29 15:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:04:39.816054582 +0000 UTC m=+220.428882237" watchObservedRunningTime="2026-01-29 15:04:39.83367609 +0000 UTC m=+220.446503755" Jan 29 15:04:39 crc kubenswrapper[4620]: E0129 15:04:39.875930 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tqw2q" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" Jan 29 15:04:39 crc kubenswrapper[4620]: E0129 15:04:39.876101 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-669zb" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" Jan 29 15:04:40 crc kubenswrapper[4620]: I0129 15:04:40.819006 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" podStartSLOduration=4.818987275 podStartE2EDuration="4.818987275s" podCreationTimestamp="2026-01-29 15:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:04:40.81790854 +0000 UTC m=+221.430736205" watchObservedRunningTime="2026-01-29 15:04:40.818987275 +0000 UTC m=+221.431814921" Jan 29 15:04:41 crc kubenswrapper[4620]: I0129 15:04:41.104534 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 15:04:41 crc kubenswrapper[4620]: I0129 15:04:41.105265 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:04:41 crc kubenswrapper[4620]: I0129 15:04:41.109588 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 15:04:41 crc kubenswrapper[4620]: I0129 15:04:41.109935 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 15:04:41 crc kubenswrapper[4620]: I0129 15:04:41.112228 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 15:04:41 crc kubenswrapper[4620]: I0129 15:04:41.163093 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a354603d-99be-4419-9d5b-184d5ff4e88b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a354603d-99be-4419-9d5b-184d5ff4e88b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:04:41 crc kubenswrapper[4620]: I0129 15:04:41.163192 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a354603d-99be-4419-9d5b-184d5ff4e88b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a354603d-99be-4419-9d5b-184d5ff4e88b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:04:41 crc kubenswrapper[4620]: I0129 15:04:41.264680 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a354603d-99be-4419-9d5b-184d5ff4e88b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a354603d-99be-4419-9d5b-184d5ff4e88b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:04:41 crc kubenswrapper[4620]: I0129 15:04:41.264775 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a354603d-99be-4419-9d5b-184d5ff4e88b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a354603d-99be-4419-9d5b-184d5ff4e88b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:04:41 crc kubenswrapper[4620]: I0129 15:04:41.264952 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a354603d-99be-4419-9d5b-184d5ff4e88b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a354603d-99be-4419-9d5b-184d5ff4e88b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:04:41 crc kubenswrapper[4620]: I0129 15:04:41.283998 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a354603d-99be-4419-9d5b-184d5ff4e88b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a354603d-99be-4419-9d5b-184d5ff4e88b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:04:41 crc kubenswrapper[4620]: I0129 15:04:41.430812 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:04:41 crc kubenswrapper[4620]: I0129 15:04:41.855876 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 15:04:41 crc kubenswrapper[4620]: E0129 15:04:41.873288 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-mn8xm" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" Jan 29 15:04:42 crc kubenswrapper[4620]: E0129 15:04:42.710216 4620 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-poda354603d_99be_4419_9d5b_184d5ff4e88b.slice/crio-conmon-575348cf283918d018223d53d5bd3b0a0a188f95199d8623701dcc527f70955b.scope\": RecentStats: unable to find data in memory cache]" Jan 29 15:04:42 crc kubenswrapper[4620]: I0129 15:04:42.812315 4620 generic.go:334] "Generic (PLEG): container finished" podID="a354603d-99be-4419-9d5b-184d5ff4e88b" containerID="575348cf283918d018223d53d5bd3b0a0a188f95199d8623701dcc527f70955b" exitCode=0 Jan 29 15:04:42 crc kubenswrapper[4620]: I0129 15:04:42.812360 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a354603d-99be-4419-9d5b-184d5ff4e88b","Type":"ContainerDied","Data":"575348cf283918d018223d53d5bd3b0a0a188f95199d8623701dcc527f70955b"} Jan 29 15:04:42 crc kubenswrapper[4620]: I0129 15:04:42.812386 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a354603d-99be-4419-9d5b-184d5ff4e88b","Type":"ContainerStarted","Data":"737fda0a805dd6c815994475fbd6a3b090b0a3a0b445a8dc2f361322a910fac9"} Jan 29 15:04:44 crc kubenswrapper[4620]: E0129 15:04:44.008939 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:04:44 crc kubenswrapper[4620]: E0129 15:04:44.009667 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mzm4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-lg725_openshift-marketplace(2aaa7ad3-f4ef-4a15-993d-43166028f71b): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:44 crc kubenswrapper[4620]: E0129 15:04:44.011147 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-lg725" podUID="2aaa7ad3-f4ef-4a15-993d-43166028f71b" Jan 29 15:04:44 crc kubenswrapper[4620]: I0129 15:04:44.138740 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:04:44 crc kubenswrapper[4620]: I0129 15:04:44.204397 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a354603d-99be-4419-9d5b-184d5ff4e88b-kubelet-dir\") pod \"a354603d-99be-4419-9d5b-184d5ff4e88b\" (UID: \"a354603d-99be-4419-9d5b-184d5ff4e88b\") " Jan 29 15:04:44 crc kubenswrapper[4620]: I0129 15:04:44.204547 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a354603d-99be-4419-9d5b-184d5ff4e88b-kube-api-access\") pod \"a354603d-99be-4419-9d5b-184d5ff4e88b\" (UID: \"a354603d-99be-4419-9d5b-184d5ff4e88b\") " Jan 29 15:04:44 crc kubenswrapper[4620]: I0129 15:04:44.206008 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a354603d-99be-4419-9d5b-184d5ff4e88b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a354603d-99be-4419-9d5b-184d5ff4e88b" (UID: "a354603d-99be-4419-9d5b-184d5ff4e88b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:04:44 crc kubenswrapper[4620]: I0129 15:04:44.211896 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a354603d-99be-4419-9d5b-184d5ff4e88b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a354603d-99be-4419-9d5b-184d5ff4e88b" (UID: "a354603d-99be-4419-9d5b-184d5ff4e88b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:04:44 crc kubenswrapper[4620]: I0129 15:04:44.305671 4620 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a354603d-99be-4419-9d5b-184d5ff4e88b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:44 crc kubenswrapper[4620]: I0129 15:04:44.305714 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a354603d-99be-4419-9d5b-184d5ff4e88b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:44 crc kubenswrapper[4620]: I0129 15:04:44.826788 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a354603d-99be-4419-9d5b-184d5ff4e88b","Type":"ContainerDied","Data":"737fda0a805dd6c815994475fbd6a3b090b0a3a0b445a8dc2f361322a910fac9"} Jan 29 15:04:44 crc kubenswrapper[4620]: I0129 15:04:44.827118 4620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="737fda0a805dd6c815994475fbd6a3b090b0a3a0b445a8dc2f361322a910fac9" Jan 29 15:04:44 crc kubenswrapper[4620]: I0129 15:04:44.827185 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:04:48 crc kubenswrapper[4620]: I0129 15:04:48.300467 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" Jan 29 15:04:48 crc kubenswrapper[4620]: I0129 15:04:48.303472 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 15:04:48 crc kubenswrapper[4620]: E0129 15:04:48.303690 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a354603d-99be-4419-9d5b-184d5ff4e88b" containerName="pruner" Jan 29 15:04:48 crc kubenswrapper[4620]: I0129 15:04:48.303708 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="a354603d-99be-4419-9d5b-184d5ff4e88b" containerName="pruner" Jan 29 15:04:48 crc kubenswrapper[4620]: I0129 15:04:48.303850 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="a354603d-99be-4419-9d5b-184d5ff4e88b" containerName="pruner" Jan 29 15:04:48 crc kubenswrapper[4620]: I0129 15:04:48.304181 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:04:48 crc kubenswrapper[4620]: I0129 15:04:48.308466 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 15:04:48 crc kubenswrapper[4620]: I0129 15:04:48.308659 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 15:04:48 crc kubenswrapper[4620]: I0129 15:04:48.313351 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" Jan 29 15:04:48 crc kubenswrapper[4620]: I0129 15:04:48.318941 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 15:04:48 crc kubenswrapper[4620]: I0129 15:04:48.356640 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a169668-f370-4ff5-bea7-162091cf8c49-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7a169668-f370-4ff5-bea7-162091cf8c49\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:04:48 crc kubenswrapper[4620]: I0129 15:04:48.357583 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a169668-f370-4ff5-bea7-162091cf8c49-kube-api-access\") pod \"installer-9-crc\" (UID: \"7a169668-f370-4ff5-bea7-162091cf8c49\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:04:48 crc kubenswrapper[4620]: I0129 15:04:48.357648 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a169668-f370-4ff5-bea7-162091cf8c49-var-lock\") pod \"installer-9-crc\" (UID: \"7a169668-f370-4ff5-bea7-162091cf8c49\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:04:48 crc kubenswrapper[4620]: I0129 15:04:48.458697 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a169668-f370-4ff5-bea7-162091cf8c49-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7a169668-f370-4ff5-bea7-162091cf8c49\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:04:48 crc kubenswrapper[4620]: I0129 15:04:48.459036 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a169668-f370-4ff5-bea7-162091cf8c49-kube-api-access\") pod \"installer-9-crc\" (UID: \"7a169668-f370-4ff5-bea7-162091cf8c49\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:04:48 crc kubenswrapper[4620]: I0129 15:04:48.458851 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a169668-f370-4ff5-bea7-162091cf8c49-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7a169668-f370-4ff5-bea7-162091cf8c49\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:04:48 crc kubenswrapper[4620]: I0129 15:04:48.459063 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a169668-f370-4ff5-bea7-162091cf8c49-var-lock\") pod \"installer-9-crc\" (UID: \"7a169668-f370-4ff5-bea7-162091cf8c49\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:04:48 crc kubenswrapper[4620]: I0129 15:04:48.459120 4620 
Jan 29 15:04:48 crc kubenswrapper[4620]: I0129 15:04:48.459120 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a169668-f370-4ff5-bea7-162091cf8c49-var-lock\") pod \"installer-9-crc\" (UID: \"7a169668-f370-4ff5-bea7-162091cf8c49\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 29 15:04:48 crc kubenswrapper[4620]: I0129 15:04:48.476122 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a169668-f370-4ff5-bea7-162091cf8c49-kube-api-access\") pod \"installer-9-crc\" (UID: \"7a169668-f370-4ff5-bea7-162091cf8c49\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 29 15:04:48 crc kubenswrapper[4620]: I0129 15:04:48.629744 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 29 15:04:49 crc kubenswrapper[4620]: I0129 15:04:49.074525 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 29 15:04:49 crc kubenswrapper[4620]: I0129 15:04:49.858331 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7a169668-f370-4ff5-bea7-162091cf8c49","Type":"ContainerStarted","Data":"bd9d886a85fb01f580ee61d82b9b21cffbf8be0d99648d1b10e8d6d2996d5312"}
Jan 29 15:04:49 crc kubenswrapper[4620]: I0129 15:04:49.858622 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7a169668-f370-4ff5-bea7-162091cf8c49","Type":"ContainerStarted","Data":"4a62ba3b776558d9aac9cc1e2c878503046453130ef1b188ec87b56d6039f3cd"}
Jan 29 15:04:49 crc kubenswrapper[4620]: I0129 15:04:49.878505 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.8784753429999999 podStartE2EDuration="1.878475343s" podCreationTimestamp="2026-01-29 15:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:04:49.872946335 +0000 UTC m=+230.485774000" watchObservedRunningTime="2026-01-29 15:04:49.878475343 +0000 UTC m=+230.491303108"
Jan 29 15:04:51 crc kubenswrapper[4620]: E0129 15:04:51.012641 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 29 15:04:51 crc kubenswrapper[4620]: E0129 15:04:51.012776 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqh7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tqw2q_openshift-marketplace(a7f8b2b8-8396-425a-94ac-e66deddac937): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:51 crc kubenswrapper[4620]: E0129 15:04:51.014697 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-tqw2q" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" Jan 29 15:04:51 crc kubenswrapper[4620]: E0129 15:04:51.998007 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:04:51 crc kubenswrapper[4620]: E0129 15:04:51.998149 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5k7hs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jn68n_openshift-marketplace(67739a3a-d009-4685-a79a-aaa81f5b2daf): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:51 crc kubenswrapper[4620]: E0129 15:04:51.998319 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:04:51 crc kubenswrapper[4620]: E0129 15:04:51.998394 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xbrp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-xwr75_openshift-marketplace(2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:51 crc kubenswrapper[4620]: E0129 15:04:51.999176 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:04:51 crc kubenswrapper[4620]: E0129 15:04:51.999215 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-jn68n" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" Jan 29 15:04:51 crc kubenswrapper[4620]: E0129 15:04:51.999247 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6ws5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-l8bmh_openshift-marketplace(0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:52 crc kubenswrapper[4620]: E0129 15:04:52.000470 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-xwr75" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" Jan 29 15:04:52 crc kubenswrapper[4620]: E0129 15:04:52.000508 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-l8bmh" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" Jan 29 15:04:53 crc kubenswrapper[4620]: E0129 15:04:53.016657 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:04:53 crc kubenswrapper[4620]: E0129 15:04:53.016821 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8jkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-669zb_openshift-marketplace(60d03ebd-82d4-4ffe-895e-2909c15480d7): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:53 crc kubenswrapper[4620]: E0129 15:04:53.018008 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-669zb" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" Jan 29 15:04:53 crc kubenswrapper[4620]: E0129 15:04:53.035811 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:04:53 crc kubenswrapper[4620]: E0129 15:04:53.036214 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nw6hg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-mzcfs_openshift-marketplace(d401dd8c-5cfc-4cbd-92de-bb9896e90ea0): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:53 crc kubenswrapper[4620]: E0129 15:04:53.037593 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-mzcfs" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" Jan 29 15:04:56 crc kubenswrapper[4620]: E0129 15:04:56.025219 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:04:56 crc kubenswrapper[4620]: E0129 15:04:56.025545 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktvhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-mn8xm_openshift-marketplace(8ced9400-3ad4-4717-ab18-327d5e40daa4): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:04:56 crc kubenswrapper[4620]: E0129 15:04:56.026769 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-mn8xm" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" Jan 29 15:04:56 crc kubenswrapper[4620]: I0129 15:04:56.451639 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86cb85f869-8n66x"] Jan 29 15:04:56 crc kubenswrapper[4620]: I0129 15:04:56.451906 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" podUID="1b040de7-ae88-4bea-926d-2e1c27575c6f" containerName="controller-manager" containerID="cri-o://10340dd163b4399d1ce5390e4a6d6027cd66d0aac45e2974d404aeeb702a555f" gracePeriod=30 Jan 29 15:04:56 crc kubenswrapper[4620]: I0129 15:04:56.466812 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn"] Jan 29 15:04:56 crc kubenswrapper[4620]: I0129 15:04:56.467133 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" podUID="05e141a1-4cc9-4674-9e81-ea51e14590ee" containerName="route-controller-manager" containerID="cri-o://2a6ab209bd966efc9fd72b7e15606d9abdcae91a4d0ea8bbed37d6e208839aae" gracePeriod=30 Jan 29 15:04:56 crc kubenswrapper[4620]: I0129 15:04:56.891103 4620 generic.go:334] "Generic (PLEG): container finished" podID="1b040de7-ae88-4bea-926d-2e1c27575c6f" containerID="10340dd163b4399d1ce5390e4a6d6027cd66d0aac45e2974d404aeeb702a555f" exitCode=0 Jan 29 15:04:56 crc 
kubenswrapper[4620]: I0129 15:04:56.891602 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" event={"ID":"1b040de7-ae88-4bea-926d-2e1c27575c6f","Type":"ContainerDied","Data":"10340dd163b4399d1ce5390e4a6d6027cd66d0aac45e2974d404aeeb702a555f"} Jan 29 15:04:56 crc kubenswrapper[4620]: I0129 15:04:56.894032 4620 generic.go:334] "Generic (PLEG): container finished" podID="05e141a1-4cc9-4674-9e81-ea51e14590ee" containerID="2a6ab209bd966efc9fd72b7e15606d9abdcae91a4d0ea8bbed37d6e208839aae" exitCode=0 Jan 29 15:04:56 crc kubenswrapper[4620]: I0129 15:04:56.894080 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" event={"ID":"05e141a1-4cc9-4674-9e81-ea51e14590ee","Type":"ContainerDied","Data":"2a6ab209bd966efc9fd72b7e15606d9abdcae91a4d0ea8bbed37d6e208839aae"} Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.024291 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.074167 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05e141a1-4cc9-4674-9e81-ea51e14590ee-config\") pod \"05e141a1-4cc9-4674-9e81-ea51e14590ee\" (UID: \"05e141a1-4cc9-4674-9e81-ea51e14590ee\") " Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.074216 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djg8f\" (UniqueName: \"kubernetes.io/projected/05e141a1-4cc9-4674-9e81-ea51e14590ee-kube-api-access-djg8f\") pod \"05e141a1-4cc9-4674-9e81-ea51e14590ee\" (UID: \"05e141a1-4cc9-4674-9e81-ea51e14590ee\") " Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.074286 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05e141a1-4cc9-4674-9e81-ea51e14590ee-serving-cert\") pod \"05e141a1-4cc9-4674-9e81-ea51e14590ee\" (UID: \"05e141a1-4cc9-4674-9e81-ea51e14590ee\") " Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.074356 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05e141a1-4cc9-4674-9e81-ea51e14590ee-client-ca\") pod \"05e141a1-4cc9-4674-9e81-ea51e14590ee\" (UID: \"05e141a1-4cc9-4674-9e81-ea51e14590ee\") " Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.074956 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05e141a1-4cc9-4674-9e81-ea51e14590ee-client-ca" (OuterVolumeSpecName: "client-ca") pod "05e141a1-4cc9-4674-9e81-ea51e14590ee" (UID: "05e141a1-4cc9-4674-9e81-ea51e14590ee"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.075024 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05e141a1-4cc9-4674-9e81-ea51e14590ee-config" (OuterVolumeSpecName: "config") pod "05e141a1-4cc9-4674-9e81-ea51e14590ee" (UID: "05e141a1-4cc9-4674-9e81-ea51e14590ee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.076301 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.079951 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05e141a1-4cc9-4674-9e81-ea51e14590ee-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "05e141a1-4cc9-4674-9e81-ea51e14590ee" (UID: "05e141a1-4cc9-4674-9e81-ea51e14590ee"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.080093 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05e141a1-4cc9-4674-9e81-ea51e14590ee-kube-api-access-djg8f" (OuterVolumeSpecName: "kube-api-access-djg8f") pod "05e141a1-4cc9-4674-9e81-ea51e14590ee" (UID: "05e141a1-4cc9-4674-9e81-ea51e14590ee"). InnerVolumeSpecName "kube-api-access-djg8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.175374 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b040de7-ae88-4bea-926d-2e1c27575c6f-serving-cert\") pod \"1b040de7-ae88-4bea-926d-2e1c27575c6f\" (UID: \"1b040de7-ae88-4bea-926d-2e1c27575c6f\") " Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.175430 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b040de7-ae88-4bea-926d-2e1c27575c6f-config\") pod \"1b040de7-ae88-4bea-926d-2e1c27575c6f\" (UID: \"1b040de7-ae88-4bea-926d-2e1c27575c6f\") " Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.175466 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b040de7-ae88-4bea-926d-2e1c27575c6f-client-ca\") pod \"1b040de7-ae88-4bea-926d-2e1c27575c6f\" (UID: \"1b040de7-ae88-4bea-926d-2e1c27575c6f\") " Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.175515 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b040de7-ae88-4bea-926d-2e1c27575c6f-proxy-ca-bundles\") pod \"1b040de7-ae88-4bea-926d-2e1c27575c6f\" (UID: \"1b040de7-ae88-4bea-926d-2e1c27575c6f\") " Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.176171 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b040de7-ae88-4bea-926d-2e1c27575c6f-client-ca" (OuterVolumeSpecName: "client-ca") pod "1b040de7-ae88-4bea-926d-2e1c27575c6f" (UID: "1b040de7-ae88-4bea-926d-2e1c27575c6f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.176194 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b040de7-ae88-4bea-926d-2e1c27575c6f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1b040de7-ae88-4bea-926d-2e1c27575c6f" (UID: "1b040de7-ae88-4bea-926d-2e1c27575c6f"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.176279 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b040de7-ae88-4bea-926d-2e1c27575c6f-config" (OuterVolumeSpecName: "config") pod "1b040de7-ae88-4bea-926d-2e1c27575c6f" (UID: "1b040de7-ae88-4bea-926d-2e1c27575c6f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.176279 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhs2d\" (UniqueName: \"kubernetes.io/projected/1b040de7-ae88-4bea-926d-2e1c27575c6f-kube-api-access-fhs2d\") pod \"1b040de7-ae88-4bea-926d-2e1c27575c6f\" (UID: \"1b040de7-ae88-4bea-926d-2e1c27575c6f\") " Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.176866 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05e141a1-4cc9-4674-9e81-ea51e14590ee-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.176894 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djg8f\" (UniqueName: \"kubernetes.io/projected/05e141a1-4cc9-4674-9e81-ea51e14590ee-kube-api-access-djg8f\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.176908 4620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05e141a1-4cc9-4674-9e81-ea51e14590ee-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.176920 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b040de7-ae88-4bea-926d-2e1c27575c6f-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.176930 4620 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b040de7-ae88-4bea-926d-2e1c27575c6f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.176940 4620 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05e141a1-4cc9-4674-9e81-ea51e14590ee-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.176951 4620 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b040de7-ae88-4bea-926d-2e1c27575c6f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.178160 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b040de7-ae88-4bea-926d-2e1c27575c6f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1b040de7-ae88-4bea-926d-2e1c27575c6f" (UID: "1b040de7-ae88-4bea-926d-2e1c27575c6f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.178866 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b040de7-ae88-4bea-926d-2e1c27575c6f-kube-api-access-fhs2d" (OuterVolumeSpecName: "kube-api-access-fhs2d") pod "1b040de7-ae88-4bea-926d-2e1c27575c6f" (UID: "1b040de7-ae88-4bea-926d-2e1c27575c6f"). InnerVolumeSpecName "kube-api-access-fhs2d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.278428 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhs2d\" (UniqueName: \"kubernetes.io/projected/1b040de7-ae88-4bea-926d-2e1c27575c6f-kube-api-access-fhs2d\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.278532 4620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b040de7-ae88-4bea-926d-2e1c27575c6f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:04:57 crc kubenswrapper[4620]: E0129 15:04:57.874363 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-lg725" podUID="2aaa7ad3-f4ef-4a15-993d-43166028f71b" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.899986 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.899963 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn" event={"ID":"05e141a1-4cc9-4674-9e81-ea51e14590ee","Type":"ContainerDied","Data":"c163042ca43e0bade83f3a5e88906f7e5a95b09491f9d15d21356da5717e5e18"} Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.900119 4620 scope.go:117] "RemoveContainer" containerID="2a6ab209bd966efc9fd72b7e15606d9abdcae91a4d0ea8bbed37d6e208839aae" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.901612 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" event={"ID":"1b040de7-ae88-4bea-926d-2e1c27575c6f","Type":"ContainerDied","Data":"1e11913cd4914ad7d8f03f3b363447db688cb927e30f02cacc2184932f2f6d37"} Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.901917 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86cb85f869-8n66x" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.916284 4620 scope.go:117] "RemoveContainer" containerID="10340dd163b4399d1ce5390e4a6d6027cd66d0aac45e2974d404aeeb702a555f" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.941703 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn"] Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.946915 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6596989664-5rqsn"] Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.948914 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86cb85f869-8n66x"] Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.951229 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-86cb85f869-8n66x"] Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.977774 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-867f79b5f7-tkc68"] Jan 29 15:04:57 crc kubenswrapper[4620]: E0129 15:04:57.978155 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b040de7-ae88-4bea-926d-2e1c27575c6f" containerName="controller-manager" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.978208 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b040de7-ae88-4bea-926d-2e1c27575c6f" containerName="controller-manager" Jan 29 15:04:57 crc kubenswrapper[4620]: E0129 15:04:57.978230 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05e141a1-4cc9-4674-9e81-ea51e14590ee" containerName="route-controller-manager" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.978239 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e141a1-4cc9-4674-9e81-ea51e14590ee" containerName="route-controller-manager" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.978410 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b040de7-ae88-4bea-926d-2e1c27575c6f" containerName="controller-manager" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.978428 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="05e141a1-4cc9-4674-9e81-ea51e14590ee" containerName="route-controller-manager" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.978888 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.980768 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr"] Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.981384 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.983526 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.983855 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.984477 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.984721 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.984973 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.985492 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.985527 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.985644 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.985683 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.985695 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.985644 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.985842 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 15:04:57 crc kubenswrapper[4620]: I0129 15:04:57.993651 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.004067 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-867f79b5f7-tkc68"] Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.020385 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr"] Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.086704 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adf086c3-518a-4b5b-a96b-501f739c32c2-serving-cert\") pod \"controller-manager-867f79b5f7-tkc68\" (UID: \"adf086c3-518a-4b5b-a96b-501f739c32c2\") " pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.086871 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-client-ca\") pod \"route-controller-manager-b64b477c5-ml7wr\" (UID: \"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64\") " pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.086945 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adf086c3-518a-4b5b-a96b-501f739c32c2-config\") pod \"controller-manager-867f79b5f7-tkc68\" (UID: \"adf086c3-518a-4b5b-a96b-501f739c32c2\") " pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.086969 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-serving-cert\") pod \"route-controller-manager-b64b477c5-ml7wr\" (UID: \"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64\") " pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.087055 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adf086c3-518a-4b5b-a96b-501f739c32c2-client-ca\") pod \"controller-manager-867f79b5f7-tkc68\" (UID: \"adf086c3-518a-4b5b-a96b-501f739c32c2\") " pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.087115 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t28f8\" (UniqueName: \"kubernetes.io/projected/adf086c3-518a-4b5b-a96b-501f739c32c2-kube-api-access-t28f8\") pod \"controller-manager-867f79b5f7-tkc68\" (UID: \"adf086c3-518a-4b5b-a96b-501f739c32c2\") " pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.087147 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-config\") pod \"route-controller-manager-b64b477c5-ml7wr\" (UID: \"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64\") " pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.087238 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl46g\" (UniqueName: \"kubernetes.io/projected/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-kube-api-access-zl46g\") pod \"route-controller-manager-b64b477c5-ml7wr\" (UID: \"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64\") " pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.087295 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/adf086c3-518a-4b5b-a96b-501f739c32c2-proxy-ca-bundles\") pod \"controller-manager-867f79b5f7-tkc68\" (UID: \"adf086c3-518a-4b5b-a96b-501f739c32c2\") " pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.188970 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zl46g\" (UniqueName: \"kubernetes.io/projected/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-kube-api-access-zl46g\") pod \"route-controller-manager-b64b477c5-ml7wr\" (UID: \"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64\") " pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.189287 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/adf086c3-518a-4b5b-a96b-501f739c32c2-proxy-ca-bundles\") pod \"controller-manager-867f79b5f7-tkc68\" (UID: \"adf086c3-518a-4b5b-a96b-501f739c32c2\") " pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.189430 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adf086c3-518a-4b5b-a96b-501f739c32c2-serving-cert\") pod \"controller-manager-867f79b5f7-tkc68\" (UID: \"adf086c3-518a-4b5b-a96b-501f739c32c2\") " pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.189549 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-client-ca\") pod \"route-controller-manager-b64b477c5-ml7wr\" (UID: \"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64\") " pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.189647 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adf086c3-518a-4b5b-a96b-501f739c32c2-config\") pod \"controller-manager-867f79b5f7-tkc68\" (UID: \"adf086c3-518a-4b5b-a96b-501f739c32c2\") " pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.189726 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-serving-cert\") pod \"route-controller-manager-b64b477c5-ml7wr\" (UID: \"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64\") " pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.189863 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adf086c3-518a-4b5b-a96b-501f739c32c2-client-ca\") pod \"controller-manager-867f79b5f7-tkc68\" (UID: \"adf086c3-518a-4b5b-a96b-501f739c32c2\") " pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.189949 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t28f8\" (UniqueName: \"kubernetes.io/projected/adf086c3-518a-4b5b-a96b-501f739c32c2-kube-api-access-t28f8\") pod \"controller-manager-867f79b5f7-tkc68\" (UID: \"adf086c3-518a-4b5b-a96b-501f739c32c2\") " pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.190055 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-config\") pod \"route-controller-manager-b64b477c5-ml7wr\" 
(UID: \"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64\") " pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.190708 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/adf086c3-518a-4b5b-a96b-501f739c32c2-proxy-ca-bundles\") pod \"controller-manager-867f79b5f7-tkc68\" (UID: \"adf086c3-518a-4b5b-a96b-501f739c32c2\") " pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.190788 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adf086c3-518a-4b5b-a96b-501f739c32c2-client-ca\") pod \"controller-manager-867f79b5f7-tkc68\" (UID: \"adf086c3-518a-4b5b-a96b-501f739c32c2\") " pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.190936 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adf086c3-518a-4b5b-a96b-501f739c32c2-config\") pod \"controller-manager-867f79b5f7-tkc68\" (UID: \"adf086c3-518a-4b5b-a96b-501f739c32c2\") " pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.191261 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-client-ca\") pod \"route-controller-manager-b64b477c5-ml7wr\" (UID: \"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64\") " pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.191599 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-config\") pod \"route-controller-manager-b64b477c5-ml7wr\" (UID: \"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64\") " pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.201512 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-serving-cert\") pod \"route-controller-manager-b64b477c5-ml7wr\" (UID: \"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64\") " pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.202434 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adf086c3-518a-4b5b-a96b-501f739c32c2-serving-cert\") pod \"controller-manager-867f79b5f7-tkc68\" (UID: \"adf086c3-518a-4b5b-a96b-501f739c32c2\") " pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.208721 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t28f8\" (UniqueName: \"kubernetes.io/projected/adf086c3-518a-4b5b-a96b-501f739c32c2-kube-api-access-t28f8\") pod \"controller-manager-867f79b5f7-tkc68\" (UID: \"adf086c3-518a-4b5b-a96b-501f739c32c2\") " pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.218870 4620 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zl46g\" (UniqueName: \"kubernetes.io/projected/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-kube-api-access-zl46g\") pod \"route-controller-manager-b64b477c5-ml7wr\" (UID: \"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64\") " pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.301984 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.321726 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.709281 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-867f79b5f7-tkc68"] Jan 29 15:04:58 crc kubenswrapper[4620]: W0129 15:04:58.717663 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadf086c3_518a_4b5b_a96b_501f739c32c2.slice/crio-5eec6cbcd76943b38c7792f6d5615adafc2b7f1021a762e994a59418039cae44 WatchSource:0}: Error finding container 5eec6cbcd76943b38c7792f6d5615adafc2b7f1021a762e994a59418039cae44: Status 404 returned error can't find the container with id 5eec6cbcd76943b38c7792f6d5615adafc2b7f1021a762e994a59418039cae44 Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.758032 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr"] Jan 29 15:04:58 crc kubenswrapper[4620]: W0129 15:04:58.768620 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcd0efa2_8986_4aee_a7cb_cdc47ad7ea64.slice/crio-6801aecb675a8e7de8fd5de66777be2de4c70069f66c931f185f9222b3717641 WatchSource:0}: Error finding container 6801aecb675a8e7de8fd5de66777be2de4c70069f66c931f185f9222b3717641: Status 404 returned error can't find the container with id 6801aecb675a8e7de8fd5de66777be2de4c70069f66c931f185f9222b3717641 Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.887008 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05e141a1-4cc9-4674-9e81-ea51e14590ee" path="/var/lib/kubelet/pods/05e141a1-4cc9-4674-9e81-ea51e14590ee/volumes" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.887901 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b040de7-ae88-4bea-926d-2e1c27575c6f" path="/var/lib/kubelet/pods/1b040de7-ae88-4bea-926d-2e1c27575c6f/volumes" Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.911280 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" event={"ID":"adf086c3-518a-4b5b-a96b-501f739c32c2","Type":"ContainerStarted","Data":"5eec6cbcd76943b38c7792f6d5615adafc2b7f1021a762e994a59418039cae44"} Jan 29 15:04:58 crc kubenswrapper[4620]: I0129 15:04:58.912606 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" event={"ID":"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64","Type":"ContainerStarted","Data":"6801aecb675a8e7de8fd5de66777be2de4c70069f66c931f185f9222b3717641"} Jan 29 15:04:59 crc kubenswrapper[4620]: I0129 15:04:59.932911 4620 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" event={"ID":"adf086c3-518a-4b5b-a96b-501f739c32c2","Type":"ContainerStarted","Data":"96b15e6048d499ac3557fac6a29e5e61127f0be585cf0f925cf77bb12de59d71"} Jan 29 15:04:59 crc kubenswrapper[4620]: I0129 15:04:59.933305 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" Jan 29 15:04:59 crc kubenswrapper[4620]: I0129 15:04:59.939926 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" event={"ID":"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64","Type":"ContainerStarted","Data":"de9704558922d4c26c7e7fe59f5c67002b16fd8cd5a45af29778dc9870187866"} Jan 29 15:04:59 crc kubenswrapper[4620]: I0129 15:04:59.940633 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" Jan 29 15:04:59 crc kubenswrapper[4620]: I0129 15:04:59.940713 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" Jan 29 15:04:59 crc kubenswrapper[4620]: I0129 15:04:59.947105 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" Jan 29 15:04:59 crc kubenswrapper[4620]: I0129 15:04:59.953534 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" podStartSLOduration=3.953512321 podStartE2EDuration="3.953512321s" podCreationTimestamp="2026-01-29 15:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:04:59.953419518 +0000 UTC m=+240.566247163" watchObservedRunningTime="2026-01-29 15:04:59.953512321 +0000 UTC m=+240.566339976" Jan 29 15:04:59 crc kubenswrapper[4620]: I0129 15:04:59.999653 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" podStartSLOduration=3.999631768 podStartE2EDuration="3.999631768s" podCreationTimestamp="2026-01-29 15:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:04:59.996978123 +0000 UTC m=+240.609805758" watchObservedRunningTime="2026-01-29 15:04:59.999631768 +0000 UTC m=+240.612459413" Jan 29 15:05:00 crc kubenswrapper[4620]: I0129 15:05:00.870395 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" podUID="7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2" containerName="oauth-openshift" containerID="cri-o://e794e243577e75c38d03d6160ff386462c38f5ebd3ec7802afe69d2a35a3f23d" gracePeriod=15 Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.285793 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.341051 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-router-certs\") pod \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.341111 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-idp-0-file-data\") pod \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.341139 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-service-ca\") pod \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.341161 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-template-login\") pod \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.341195 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-audit-policies\") pod \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.341230 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-session\") pod \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.341268 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-serving-cert\") pod \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.341314 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-trusted-ca-bundle\") pod \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.341338 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-ocp-branding-template\") pod \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " Jan 29 
15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.341373 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85gmx\" (UniqueName: \"kubernetes.io/projected/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-kube-api-access-85gmx\") pod \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.341395 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-template-error\") pod \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.341423 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-cliconfig\") pod \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.341449 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-template-provider-selection\") pod \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.341495 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-audit-dir\") pod \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\" (UID: \"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2\") " Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.341837 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2" (UID: "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.341835 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2" (UID: "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.342218 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2" (UID: "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.342378 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2" (UID: "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.343117 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2" (UID: "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.348903 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2" (UID: "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.349204 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2" (UID: "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.349930 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2" (UID: "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.350519 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2" (UID: "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.352456 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2" (UID: "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.352649 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2" (UID: "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.353453 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-kube-api-access-85gmx" (OuterVolumeSpecName: "kube-api-access-85gmx") pod "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2" (UID: "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2"). InnerVolumeSpecName "kube-api-access-85gmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.355596 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2" (UID: "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.366248 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2" (UID: "7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.443532 4620 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.443589 4620 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.443601 4620 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.443613 4620 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.443627 4620 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.443638 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85gmx\" (UniqueName: \"kubernetes.io/projected/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-kube-api-access-85gmx\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.443650 4620 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.443662 4620 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.443671 4620 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.443683 4620 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.443693 4620 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.443705 4620 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.443717 4620 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.443734 4620 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.953670 4620 generic.go:334] "Generic (PLEG): container finished" podID="7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2" containerID="e794e243577e75c38d03d6160ff386462c38f5ebd3ec7802afe69d2a35a3f23d" exitCode=0 Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.953798 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.953866 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" event={"ID":"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2","Type":"ContainerDied","Data":"e794e243577e75c38d03d6160ff386462c38f5ebd3ec7802afe69d2a35a3f23d"} Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.953919 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8q7cm" event={"ID":"7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2","Type":"ContainerDied","Data":"ec98a91fd2fe72f0344b66f9c97effb80647b792711b180fb1450ec88cf2ec07"} Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.953944 4620 scope.go:117] "RemoveContainer" containerID="e794e243577e75c38d03d6160ff386462c38f5ebd3ec7802afe69d2a35a3f23d" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.980389 4620 scope.go:117] "RemoveContainer" containerID="e794e243577e75c38d03d6160ff386462c38f5ebd3ec7802afe69d2a35a3f23d" Jan 29 15:05:01 crc kubenswrapper[4620]: E0129 15:05:01.980844 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e794e243577e75c38d03d6160ff386462c38f5ebd3ec7802afe69d2a35a3f23d\": container with ID starting with e794e243577e75c38d03d6160ff386462c38f5ebd3ec7802afe69d2a35a3f23d not found: ID does not exist" containerID="e794e243577e75c38d03d6160ff386462c38f5ebd3ec7802afe69d2a35a3f23d" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.980910 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e794e243577e75c38d03d6160ff386462c38f5ebd3ec7802afe69d2a35a3f23d"} err="failed to get container status \"e794e243577e75c38d03d6160ff386462c38f5ebd3ec7802afe69d2a35a3f23d\": rpc error: code = NotFound desc = could not find container \"e794e243577e75c38d03d6160ff386462c38f5ebd3ec7802afe69d2a35a3f23d\": container with ID starting with e794e243577e75c38d03d6160ff386462c38f5ebd3ec7802afe69d2a35a3f23d not found: ID does not exist" Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.985898 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8q7cm"] Jan 29 15:05:01 crc kubenswrapper[4620]: I0129 15:05:01.990843 4620 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8q7cm"] Jan 29 15:05:02 crc kubenswrapper[4620]: E0129 15:05:02.874249 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-xwr75" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" Jan 29 15:05:02 crc kubenswrapper[4620]: E0129 15:05:02.874382 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jn68n" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" Jan 29 15:05:02 crc kubenswrapper[4620]: I0129 15:05:02.882097 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2" path="/var/lib/kubelet/pods/7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2/volumes" Jan 29 15:05:04 crc kubenswrapper[4620]: I0129 15:05:04.111386 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:05:04 crc kubenswrapper[4620]: I0129 15:05:04.111986 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:05:04 crc kubenswrapper[4620]: I0129 15:05:04.112056 4620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:05:04 crc kubenswrapper[4620]: I0129 15:05:04.112932 4620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529"} pod="openshift-machine-config-operator/machine-config-daemon-7469t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:05:04 crc kubenswrapper[4620]: I0129 15:05:04.112998 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" containerID="cri-o://2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529" gracePeriod=600 Jan 29 15:05:04 crc kubenswrapper[4620]: E0129 15:05:04.877405 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tqw2q" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" Jan 29 15:05:04 crc kubenswrapper[4620]: E0129 15:05:04.877435 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l8bmh" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" Jan 29 15:05:04 crc kubenswrapper[4620]: E0129 15:05:04.877535 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-669zb" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" Jan 29 15:05:04 crc kubenswrapper[4620]: I0129 15:05:04.974041 4620 generic.go:334] "Generic (PLEG): container finished" podID="a76cce43-3d01-4158-b23a-e21fd5927792" containerID="2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529" exitCode=0 Jan 29 15:05:04 crc kubenswrapper[4620]: I0129 15:05:04.974091 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerDied","Data":"2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529"} Jan 29 15:05:04 crc kubenswrapper[4620]: I0129 15:05:04.974119 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerStarted","Data":"148ab2afd86389a773d0f644041231ed6425f5f254b9294c6c0a376e8daec7d9"} Jan 29 15:05:06 crc kubenswrapper[4620]: E0129 15:05:06.877011 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-mzcfs" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.011247 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-f9d58f4c-dh7gn"] Jan 29 15:05:08 crc kubenswrapper[4620]: E0129 15:05:08.012741 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2" containerName="oauth-openshift" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.012858 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2" containerName="oauth-openshift" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.013254 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e5fc0ff-9c52-4dba-9fa4-3d5d76a44ac2" containerName="oauth-openshift" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.014882 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.018196 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.018723 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-f9d58f4c-dh7gn"] Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.018790 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.019357 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.019537 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.028274 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.030645 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.031015 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.031534 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.032289 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.033672 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.037204 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.046119 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.047429 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.058061 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.060493 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.145159 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " 
pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.145219 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2l8v\" (UniqueName: \"kubernetes.io/projected/b8cd5482-58fa-4aa2-8605-97a15267b62a-kube-api-access-c2l8v\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.145248 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.145289 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8cd5482-58fa-4aa2-8605-97a15267b62a-audit-policies\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.145313 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-system-router-certs\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.145343 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.145367 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.145405 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8cd5482-58fa-4aa2-8605-97a15267b62a-audit-dir\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.145428 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-system-service-ca\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: 
\"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.145451 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-user-template-error\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.145474 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.145513 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-user-template-login\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.145563 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.145596 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-system-session\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.246573 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-system-session\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.246929 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.246953 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2l8v\" (UniqueName: 
\"kubernetes.io/projected/b8cd5482-58fa-4aa2-8605-97a15267b62a-kube-api-access-c2l8v\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.246970 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.247009 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8cd5482-58fa-4aa2-8605-97a15267b62a-audit-policies\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.247040 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-system-router-certs\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.247070 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.247093 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.247121 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8cd5482-58fa-4aa2-8605-97a15267b62a-audit-dir\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.247136 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-user-template-error\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.247153 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.247188 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.247229 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-user-template-login\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.247284 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.248400 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.248473 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b8cd5482-58fa-4aa2-8605-97a15267b62a-audit-dir\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.249176 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.249727 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b8cd5482-58fa-4aa2-8605-97a15267b62a-audit-policies\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.252054 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-user-template-error\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " 
pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.252310 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-system-session\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.252809 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-system-service-ca\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.253132 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.253138 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.254303 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-system-router-certs\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.255183 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.261429 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-user-template-login\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.261441 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b8cd5482-58fa-4aa2-8605-97a15267b62a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 
15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.266322 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2l8v\" (UniqueName: \"kubernetes.io/projected/b8cd5482-58fa-4aa2-8605-97a15267b62a-kube-api-access-c2l8v\") pod \"oauth-openshift-f9d58f4c-dh7gn\" (UID: \"b8cd5482-58fa-4aa2-8605-97a15267b62a\") " pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.359022 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:08 crc kubenswrapper[4620]: I0129 15:05:08.787683 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-f9d58f4c-dh7gn"] Jan 29 15:05:09 crc kubenswrapper[4620]: I0129 15:05:09.003406 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" event={"ID":"b8cd5482-58fa-4aa2-8605-97a15267b62a","Type":"ContainerStarted","Data":"be6ccb3dde9aea8ff7754358ffb3f3e4d86b0d8a45bfd2937162255e267bf6df"} Jan 29 15:05:09 crc kubenswrapper[4620]: E0129 15:05:09.874561 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-lg725" podUID="2aaa7ad3-f4ef-4a15-993d-43166028f71b" Jan 29 15:05:10 crc kubenswrapper[4620]: I0129 15:05:10.010855 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" event={"ID":"b8cd5482-58fa-4aa2-8605-97a15267b62a","Type":"ContainerStarted","Data":"e7e268d3e1cdd95477695e05d449049c86ef6e889f7ad115ed882f7de9ae1f09"} Jan 29 15:05:10 crc kubenswrapper[4620]: I0129 15:05:10.013136 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:10 crc kubenswrapper[4620]: I0129 15:05:10.017687 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" Jan 29 15:05:10 crc kubenswrapper[4620]: I0129 15:05:10.035661 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-f9d58f4c-dh7gn" podStartSLOduration=35.03563878 podStartE2EDuration="35.03563878s" podCreationTimestamp="2026-01-29 15:04:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:05:10.030048359 +0000 UTC m=+250.642876044" watchObservedRunningTime="2026-01-29 15:05:10.03563878 +0000 UTC m=+250.648466425" Jan 29 15:05:10 crc kubenswrapper[4620]: E0129 15:05:10.882634 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-mn8xm" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" Jan 29 15:05:14 crc kubenswrapper[4620]: E0129 15:05:14.874365 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-jn68n" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" Jan 29 15:05:15 crc kubenswrapper[4620]: E0129 15:05:15.874602 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tqw2q" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" Jan 29 15:05:16 crc kubenswrapper[4620]: I0129 15:05:16.448447 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-867f79b5f7-tkc68"] Jan 29 15:05:16 crc kubenswrapper[4620]: I0129 15:05:16.449350 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" podUID="adf086c3-518a-4b5b-a96b-501f739c32c2" containerName="controller-manager" containerID="cri-o://96b15e6048d499ac3557fac6a29e5e61127f0be585cf0f925cf77bb12de59d71" gracePeriod=30 Jan 29 15:05:16 crc kubenswrapper[4620]: I0129 15:05:16.549310 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr"] Jan 29 15:05:16 crc kubenswrapper[4620]: I0129 15:05:16.549832 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" podUID="fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64" containerName="route-controller-manager" containerID="cri-o://de9704558922d4c26c7e7fe59f5c67002b16fd8cd5a45af29778dc9870187866" gracePeriod=30 Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.046576 4620 generic.go:334] "Generic (PLEG): container finished" podID="fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64" containerID="de9704558922d4c26c7e7fe59f5c67002b16fd8cd5a45af29778dc9870187866" exitCode=0 Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.046917 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" event={"ID":"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64","Type":"ContainerDied","Data":"de9704558922d4c26c7e7fe59f5c67002b16fd8cd5a45af29778dc9870187866"} Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.046953 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" event={"ID":"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64","Type":"ContainerDied","Data":"6801aecb675a8e7de8fd5de66777be2de4c70069f66c931f185f9222b3717641"} Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.046967 4620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6801aecb675a8e7de8fd5de66777be2de4c70069f66c931f185f9222b3717641" Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.048086 4620 generic.go:334] "Generic (PLEG): container finished" podID="adf086c3-518a-4b5b-a96b-501f739c32c2" containerID="96b15e6048d499ac3557fac6a29e5e61127f0be585cf0f925cf77bb12de59d71" exitCode=0 Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.048112 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" event={"ID":"adf086c3-518a-4b5b-a96b-501f739c32c2","Type":"ContainerDied","Data":"96b15e6048d499ac3557fac6a29e5e61127f0be585cf0f925cf77bb12de59d71"} Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.052254 4620 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr" Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.068605 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-config\") pod \"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64\" (UID: \"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64\") " Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.068648 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-client-ca\") pod \"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64\" (UID: \"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64\") " Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.068668 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-serving-cert\") pod \"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64\" (UID: \"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64\") " Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.068771 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl46g\" (UniqueName: \"kubernetes.io/projected/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-kube-api-access-zl46g\") pod \"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64\" (UID: \"fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64\") " Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.070065 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-config" (OuterVolumeSpecName: "config") pod "fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64" (UID: "fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.070244 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-client-ca" (OuterVolumeSpecName: "client-ca") pod "fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64" (UID: "fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.079609 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64" (UID: "fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.079714 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-kube-api-access-zl46g" (OuterVolumeSpecName: "kube-api-access-zl46g") pod "fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64" (UID: "fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64"). InnerVolumeSpecName "kube-api-access-zl46g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.169972 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.170006 4620 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.170021 4620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.170034 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl46g\" (UniqueName: \"kubernetes.io/projected/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64-kube-api-access-zl46g\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.482420 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.574167 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/adf086c3-518a-4b5b-a96b-501f739c32c2-proxy-ca-bundles\") pod \"adf086c3-518a-4b5b-a96b-501f739c32c2\" (UID: \"adf086c3-518a-4b5b-a96b-501f739c32c2\") " Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.574335 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adf086c3-518a-4b5b-a96b-501f739c32c2-client-ca\") pod \"adf086c3-518a-4b5b-a96b-501f739c32c2\" (UID: \"adf086c3-518a-4b5b-a96b-501f739c32c2\") " Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.574369 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adf086c3-518a-4b5b-a96b-501f739c32c2-config\") pod \"adf086c3-518a-4b5b-a96b-501f739c32c2\" (UID: \"adf086c3-518a-4b5b-a96b-501f739c32c2\") " Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.574409 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t28f8\" (UniqueName: \"kubernetes.io/projected/adf086c3-518a-4b5b-a96b-501f739c32c2-kube-api-access-t28f8\") pod \"adf086c3-518a-4b5b-a96b-501f739c32c2\" (UID: \"adf086c3-518a-4b5b-a96b-501f739c32c2\") " Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.574436 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adf086c3-518a-4b5b-a96b-501f739c32c2-serving-cert\") pod \"adf086c3-518a-4b5b-a96b-501f739c32c2\" (UID: \"adf086c3-518a-4b5b-a96b-501f739c32c2\") " Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.574935 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adf086c3-518a-4b5b-a96b-501f739c32c2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "adf086c3-518a-4b5b-a96b-501f739c32c2" (UID: "adf086c3-518a-4b5b-a96b-501f739c32c2"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.575514 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adf086c3-518a-4b5b-a96b-501f739c32c2-config" (OuterVolumeSpecName: "config") pod "adf086c3-518a-4b5b-a96b-501f739c32c2" (UID: "adf086c3-518a-4b5b-a96b-501f739c32c2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.575590 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adf086c3-518a-4b5b-a96b-501f739c32c2-client-ca" (OuterVolumeSpecName: "client-ca") pod "adf086c3-518a-4b5b-a96b-501f739c32c2" (UID: "adf086c3-518a-4b5b-a96b-501f739c32c2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.579904 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adf086c3-518a-4b5b-a96b-501f739c32c2-kube-api-access-t28f8" (OuterVolumeSpecName: "kube-api-access-t28f8") pod "adf086c3-518a-4b5b-a96b-501f739c32c2" (UID: "adf086c3-518a-4b5b-a96b-501f739c32c2"). InnerVolumeSpecName "kube-api-access-t28f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.581915 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf086c3-518a-4b5b-a96b-501f739c32c2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "adf086c3-518a-4b5b-a96b-501f739c32c2" (UID: "adf086c3-518a-4b5b-a96b-501f739c32c2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.676079 4620 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adf086c3-518a-4b5b-a96b-501f739c32c2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.676112 4620 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/adf086c3-518a-4b5b-a96b-501f739c32c2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.676121 4620 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adf086c3-518a-4b5b-a96b-501f739c32c2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.676129 4620 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adf086c3-518a-4b5b-a96b-501f739c32c2-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.676137 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t28f8\" (UniqueName: \"kubernetes.io/projected/adf086c3-518a-4b5b-a96b-501f739c32c2-kube-api-access-t28f8\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:17 crc kubenswrapper[4620]: E0129 15:05:17.874741 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l8bmh" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" Jan 29 15:05:17 crc kubenswrapper[4620]: 
Jan 29 15:05:17 crc kubenswrapper[4620]: E0129 15:05:17.876664 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-xwr75" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff"
Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.993054 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-686cc8c8bb-7rmqt"]
Jan 29 15:05:17 crc kubenswrapper[4620]: E0129 15:05:17.993374 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf086c3-518a-4b5b-a96b-501f739c32c2" containerName="controller-manager"
Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.993398 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf086c3-518a-4b5b-a96b-501f739c32c2" containerName="controller-manager"
Jan 29 15:05:17 crc kubenswrapper[4620]: E0129 15:05:17.993421 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64" containerName="route-controller-manager"
Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.993432 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64" containerName="route-controller-manager"
Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.993653 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64" containerName="route-controller-manager"
Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.993673 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf086c3-518a-4b5b-a96b-501f739c32c2" containerName="controller-manager"
Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.994367 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-686cc8c8bb-7rmqt"
Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.998326 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-84f77569bb-4qtqb"]
Jan 29 15:05:17 crc kubenswrapper[4620]: I0129 15:05:17.999336 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84f77569bb-4qtqb"
Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.008066 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84f77569bb-4qtqb"]
Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.010717 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-686cc8c8bb-7rmqt"]
Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.053529 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68"
Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.053532 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr"
Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.053529 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-867f79b5f7-tkc68" event={"ID":"adf086c3-518a-4b5b-a96b-501f739c32c2","Type":"ContainerDied","Data":"5eec6cbcd76943b38c7792f6d5615adafc2b7f1021a762e994a59418039cae44"}
Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.053742 4620 scope.go:117] "RemoveContainer" containerID="96b15e6048d499ac3557fac6a29e5e61127f0be585cf0f925cf77bb12de59d71"
Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.082172 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmzgl\" (UniqueName: \"kubernetes.io/projected/00e625fd-f1a3-431a-a24b-23e358d35080-kube-api-access-bmzgl\") pod \"route-controller-manager-686cc8c8bb-7rmqt\" (UID: \"00e625fd-f1a3-431a-a24b-23e358d35080\") " pod="openshift-route-controller-manager/route-controller-manager-686cc8c8bb-7rmqt"
Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.082217 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3c3b698-e2d2-4784-9c2b-06ed1427cb8e-config\") pod \"controller-manager-84f77569bb-4qtqb\" (UID: \"f3c3b698-e2d2-4784-9c2b-06ed1427cb8e\") " pod="openshift-controller-manager/controller-manager-84f77569bb-4qtqb"
Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.082324 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00e625fd-f1a3-431a-a24b-23e358d35080-client-ca\") pod \"route-controller-manager-686cc8c8bb-7rmqt\" (UID: \"00e625fd-f1a3-431a-a24b-23e358d35080\") " pod="openshift-route-controller-manager/route-controller-manager-686cc8c8bb-7rmqt"
Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.082398 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00e625fd-f1a3-431a-a24b-23e358d35080-serving-cert\") pod \"route-controller-manager-686cc8c8bb-7rmqt\" (UID: \"00e625fd-f1a3-431a-a24b-23e358d35080\") " pod="openshift-route-controller-manager/route-controller-manager-686cc8c8bb-7rmqt"
Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.082431 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f3c3b698-e2d2-4784-9c2b-06ed1427cb8e-proxy-ca-bundles\") pod \"controller-manager-84f77569bb-4qtqb\" (UID: \"f3c3b698-e2d2-4784-9c2b-06ed1427cb8e\") " pod="openshift-controller-manager/controller-manager-84f77569bb-4qtqb"
Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.082464 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3c3b698-e2d2-4784-9c2b-06ed1427cb8e-client-ca\") pod \"controller-manager-84f77569bb-4qtqb\" (UID: \"f3c3b698-e2d2-4784-9c2b-06ed1427cb8e\") " pod="openshift-controller-manager/controller-manager-84f77569bb-4qtqb"
\"kubernetes.io/configmap/00e625fd-f1a3-431a-a24b-23e358d35080-config\") pod \"route-controller-manager-686cc8c8bb-7rmqt\" (UID: \"00e625fd-f1a3-431a-a24b-23e358d35080\") " pod="openshift-route-controller-manager/route-controller-manager-686cc8c8bb-7rmqt" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.082529 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3c3b698-e2d2-4784-9c2b-06ed1427cb8e-serving-cert\") pod \"controller-manager-84f77569bb-4qtqb\" (UID: \"f3c3b698-e2d2-4784-9c2b-06ed1427cb8e\") " pod="openshift-controller-manager/controller-manager-84f77569bb-4qtqb" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.082546 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx527\" (UniqueName: \"kubernetes.io/projected/f3c3b698-e2d2-4784-9c2b-06ed1427cb8e-kube-api-access-zx527\") pod \"controller-manager-84f77569bb-4qtqb\" (UID: \"f3c3b698-e2d2-4784-9c2b-06ed1427cb8e\") " pod="openshift-controller-manager/controller-manager-84f77569bb-4qtqb" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.086285 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-867f79b5f7-tkc68"] Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.089237 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-867f79b5f7-tkc68"] Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.094771 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr"] Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.101529 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b64b477c5-ml7wr"] Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.183669 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f3c3b698-e2d2-4784-9c2b-06ed1427cb8e-proxy-ca-bundles\") pod \"controller-manager-84f77569bb-4qtqb\" (UID: \"f3c3b698-e2d2-4784-9c2b-06ed1427cb8e\") " pod="openshift-controller-manager/controller-manager-84f77569bb-4qtqb" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.183715 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3c3b698-e2d2-4784-9c2b-06ed1427cb8e-client-ca\") pod \"controller-manager-84f77569bb-4qtqb\" (UID: \"f3c3b698-e2d2-4784-9c2b-06ed1427cb8e\") " pod="openshift-controller-manager/controller-manager-84f77569bb-4qtqb" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.183738 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00e625fd-f1a3-431a-a24b-23e358d35080-config\") pod \"route-controller-manager-686cc8c8bb-7rmqt\" (UID: \"00e625fd-f1a3-431a-a24b-23e358d35080\") " pod="openshift-route-controller-manager/route-controller-manager-686cc8c8bb-7rmqt" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.183759 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3c3b698-e2d2-4784-9c2b-06ed1427cb8e-serving-cert\") pod \"controller-manager-84f77569bb-4qtqb\" (UID: \"f3c3b698-e2d2-4784-9c2b-06ed1427cb8e\") " 
pod="openshift-controller-manager/controller-manager-84f77569bb-4qtqb" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.183795 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx527\" (UniqueName: \"kubernetes.io/projected/f3c3b698-e2d2-4784-9c2b-06ed1427cb8e-kube-api-access-zx527\") pod \"controller-manager-84f77569bb-4qtqb\" (UID: \"f3c3b698-e2d2-4784-9c2b-06ed1427cb8e\") " pod="openshift-controller-manager/controller-manager-84f77569bb-4qtqb" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.183820 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmzgl\" (UniqueName: \"kubernetes.io/projected/00e625fd-f1a3-431a-a24b-23e358d35080-kube-api-access-bmzgl\") pod \"route-controller-manager-686cc8c8bb-7rmqt\" (UID: \"00e625fd-f1a3-431a-a24b-23e358d35080\") " pod="openshift-route-controller-manager/route-controller-manager-686cc8c8bb-7rmqt" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.183836 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3c3b698-e2d2-4784-9c2b-06ed1427cb8e-config\") pod \"controller-manager-84f77569bb-4qtqb\" (UID: \"f3c3b698-e2d2-4784-9c2b-06ed1427cb8e\") " pod="openshift-controller-manager/controller-manager-84f77569bb-4qtqb" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.183896 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00e625fd-f1a3-431a-a24b-23e358d35080-client-ca\") pod \"route-controller-manager-686cc8c8bb-7rmqt\" (UID: \"00e625fd-f1a3-431a-a24b-23e358d35080\") " pod="openshift-route-controller-manager/route-controller-manager-686cc8c8bb-7rmqt" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.183930 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00e625fd-f1a3-431a-a24b-23e358d35080-serving-cert\") pod \"route-controller-manager-686cc8c8bb-7rmqt\" (UID: \"00e625fd-f1a3-431a-a24b-23e358d35080\") " pod="openshift-route-controller-manager/route-controller-manager-686cc8c8bb-7rmqt" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.185291 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3c3b698-e2d2-4784-9c2b-06ed1427cb8e-client-ca\") pod \"controller-manager-84f77569bb-4qtqb\" (UID: \"f3c3b698-e2d2-4784-9c2b-06ed1427cb8e\") " pod="openshift-controller-manager/controller-manager-84f77569bb-4qtqb" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.185347 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00e625fd-f1a3-431a-a24b-23e358d35080-client-ca\") pod \"route-controller-manager-686cc8c8bb-7rmqt\" (UID: \"00e625fd-f1a3-431a-a24b-23e358d35080\") " pod="openshift-route-controller-manager/route-controller-manager-686cc8c8bb-7rmqt" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.185502 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00e625fd-f1a3-431a-a24b-23e358d35080-config\") pod \"route-controller-manager-686cc8c8bb-7rmqt\" (UID: \"00e625fd-f1a3-431a-a24b-23e358d35080\") " pod="openshift-route-controller-manager/route-controller-manager-686cc8c8bb-7rmqt" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.185756 4620 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3c3b698-e2d2-4784-9c2b-06ed1427cb8e-config\") pod \"controller-manager-84f77569bb-4qtqb\" (UID: \"f3c3b698-e2d2-4784-9c2b-06ed1427cb8e\") " pod="openshift-controller-manager/controller-manager-84f77569bb-4qtqb" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.186091 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f3c3b698-e2d2-4784-9c2b-06ed1427cb8e-proxy-ca-bundles\") pod \"controller-manager-84f77569bb-4qtqb\" (UID: \"f3c3b698-e2d2-4784-9c2b-06ed1427cb8e\") " pod="openshift-controller-manager/controller-manager-84f77569bb-4qtqb" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.195854 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3c3b698-e2d2-4784-9c2b-06ed1427cb8e-serving-cert\") pod \"controller-manager-84f77569bb-4qtqb\" (UID: \"f3c3b698-e2d2-4784-9c2b-06ed1427cb8e\") " pod="openshift-controller-manager/controller-manager-84f77569bb-4qtqb" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.201076 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx527\" (UniqueName: \"kubernetes.io/projected/f3c3b698-e2d2-4784-9c2b-06ed1427cb8e-kube-api-access-zx527\") pod \"controller-manager-84f77569bb-4qtqb\" (UID: \"f3c3b698-e2d2-4784-9c2b-06ed1427cb8e\") " pod="openshift-controller-manager/controller-manager-84f77569bb-4qtqb" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.203213 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00e625fd-f1a3-431a-a24b-23e358d35080-serving-cert\") pod \"route-controller-manager-686cc8c8bb-7rmqt\" (UID: \"00e625fd-f1a3-431a-a24b-23e358d35080\") " pod="openshift-route-controller-manager/route-controller-manager-686cc8c8bb-7rmqt" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.203534 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmzgl\" (UniqueName: \"kubernetes.io/projected/00e625fd-f1a3-431a-a24b-23e358d35080-kube-api-access-bmzgl\") pod \"route-controller-manager-686cc8c8bb-7rmqt\" (UID: \"00e625fd-f1a3-431a-a24b-23e358d35080\") " pod="openshift-route-controller-manager/route-controller-manager-686cc8c8bb-7rmqt" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.321650 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-686cc8c8bb-7rmqt" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.332750 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84f77569bb-4qtqb" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.716895 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-686cc8c8bb-7rmqt"] Jan 29 15:05:18 crc kubenswrapper[4620]: W0129 15:05:18.724187 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00e625fd_f1a3_431a_a24b_23e358d35080.slice/crio-6facf3a13ef310f6568808b4c10c515fdeafc95d0fe3a2d3ac3d47fa046ff6ba WatchSource:0}: Error finding container 6facf3a13ef310f6568808b4c10c515fdeafc95d0fe3a2d3ac3d47fa046ff6ba: Status 404 returned error can't find the container with id 6facf3a13ef310f6568808b4c10c515fdeafc95d0fe3a2d3ac3d47fa046ff6ba Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.786302 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84f77569bb-4qtqb"] Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.883966 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adf086c3-518a-4b5b-a96b-501f739c32c2" path="/var/lib/kubelet/pods/adf086c3-518a-4b5b-a96b-501f739c32c2/volumes" Jan 29 15:05:18 crc kubenswrapper[4620]: I0129 15:05:18.884613 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64" path="/var/lib/kubelet/pods/fcd0efa2-8986-4aee-a7cb-cdc47ad7ea64/volumes" Jan 29 15:05:19 crc kubenswrapper[4620]: I0129 15:05:19.058321 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84f77569bb-4qtqb" event={"ID":"f3c3b698-e2d2-4784-9c2b-06ed1427cb8e","Type":"ContainerStarted","Data":"38b3235d3353f1e7bbbab300ed5f2855536047f233db650584e1eee7d118e988"} Jan 29 15:05:19 crc kubenswrapper[4620]: I0129 15:05:19.058649 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84f77569bb-4qtqb" event={"ID":"f3c3b698-e2d2-4784-9c2b-06ed1427cb8e","Type":"ContainerStarted","Data":"b53b34e3cc241d2667e5a916ea50cffc2c277bb2483d48a6e0fb341eace34bc5"} Jan 29 15:05:19 crc kubenswrapper[4620]: I0129 15:05:19.059622 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-84f77569bb-4qtqb" Jan 29 15:05:19 crc kubenswrapper[4620]: I0129 15:05:19.061733 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-686cc8c8bb-7rmqt" event={"ID":"00e625fd-f1a3-431a-a24b-23e358d35080","Type":"ContainerStarted","Data":"27dc3971fa6f80416918aa86dd15a116431e872310d77eebd901164cfa3a8cc3"} Jan 29 15:05:19 crc kubenswrapper[4620]: I0129 15:05:19.061760 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-686cc8c8bb-7rmqt" event={"ID":"00e625fd-f1a3-431a-a24b-23e358d35080","Type":"ContainerStarted","Data":"6facf3a13ef310f6568808b4c10c515fdeafc95d0fe3a2d3ac3d47fa046ff6ba"} Jan 29 15:05:19 crc kubenswrapper[4620]: I0129 15:05:19.062259 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-686cc8c8bb-7rmqt" Jan 29 15:05:19 crc kubenswrapper[4620]: I0129 15:05:19.069418 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84f77569bb-4qtqb" Jan 29 
15:05:19 crc kubenswrapper[4620]: I0129 15:05:19.084651 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-84f77569bb-4qtqb" podStartSLOduration=3.084633313 podStartE2EDuration="3.084633313s" podCreationTimestamp="2026-01-29 15:05:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:05:19.08174826 +0000 UTC m=+259.694575915" watchObservedRunningTime="2026-01-29 15:05:19.084633313 +0000 UTC m=+259.697460958" Jan 29 15:05:19 crc kubenswrapper[4620]: I0129 15:05:19.354870 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-686cc8c8bb-7rmqt" Jan 29 15:05:19 crc kubenswrapper[4620]: I0129 15:05:19.388830 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-686cc8c8bb-7rmqt" podStartSLOduration=3.388811526 podStartE2EDuration="3.388811526s" podCreationTimestamp="2026-01-29 15:05:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:05:19.134738958 +0000 UTC m=+259.747566613" watchObservedRunningTime="2026-01-29 15:05:19.388811526 +0000 UTC m=+260.001639171" Jan 29 15:05:19 crc kubenswrapper[4620]: E0129 15:05:19.874061 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-669zb" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" Jan 29 15:05:20 crc kubenswrapper[4620]: E0129 15:05:20.876330 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-mzcfs" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" Jan 29 15:05:20 crc kubenswrapper[4620]: E0129 15:05:20.877843 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-lg725" podUID="2aaa7ad3-f4ef-4a15-993d-43166028f71b" Jan 29 15:05:24 crc kubenswrapper[4620]: E0129 15:05:24.875938 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-mn8xm" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.375497 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lg725"] Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.385699 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mzcfs"] Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.403073 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jn68n"] Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 
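
The "Observed pod startup duration" records can be checked by hand: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp (15:05:19.084633313 − 15:05:16 = 3.084633313 s for the controller-manager pod), with image-pull time excluded; both pull timestamps are zero values here, consistent with the images already being on the node. The same arithmetic in Go:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the pod_startup_latency_tracker record above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2026-01-29 15:05:16 +0000 UTC")
        running, _ := time.Parse(layout, "2026-01-29 15:05:19.084633313 +0000 UTC")
        fmt.Println(running.Sub(created).Seconds()) // 3.084633313 == podStartSLOduration
    }
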
Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.422817 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l8bmh"]
Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.432306 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zsh8m"]
Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.432710 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" podUID="7436f3bf-66e4-4314-aa0b-8af645dd5bee" containerName="marketplace-operator" containerID="cri-o://b54ea343b2ffad95c7a0f0fb31efb8d3f4d9e71ef55fe8c545e2c3932946b840" gracePeriod=30
Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.441599 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mn8xm"]
Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.447672 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xwr75"]
Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.454021 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f8tnf"]
Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.454836 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf"
Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.458838 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-669zb"]
Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.478584 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c44039e8-f318-4ec2-bd3c-587834aa4bb8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-f8tnf\" (UID: \"c44039e8-f318-4ec2-bd3c-587834aa4bb8\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf"
Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.478658 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzgn7\" (UniqueName: \"kubernetes.io/projected/c44039e8-f318-4ec2-bd3c-587834aa4bb8-kube-api-access-vzgn7\") pod \"marketplace-operator-79b997595-f8tnf\" (UID: \"c44039e8-f318-4ec2-bd3c-587834aa4bb8\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf"
Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.478695 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c44039e8-f318-4ec2-bd3c-587834aa4bb8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-f8tnf\" (UID: \"c44039e8-f318-4ec2-bd3c-587834aa4bb8\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf"
Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.479028 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f8tnf"]
Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.480176 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tqw2q"]
Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.584441 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c44039e8-f318-4ec2-bd3c-587834aa4bb8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-f8tnf\" (UID: \"c44039e8-f318-4ec2-bd3c-587834aa4bb8\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf"
Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.584543 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c44039e8-f318-4ec2-bd3c-587834aa4bb8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-f8tnf\" (UID: \"c44039e8-f318-4ec2-bd3c-587834aa4bb8\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf"
Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.584615 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzgn7\" (UniqueName: \"kubernetes.io/projected/c44039e8-f318-4ec2-bd3c-587834aa4bb8-kube-api-access-vzgn7\") pod \"marketplace-operator-79b997595-f8tnf\" (UID: \"c44039e8-f318-4ec2-bd3c-587834aa4bb8\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf"
Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.587356 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c44039e8-f318-4ec2-bd3c-587834aa4bb8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-f8tnf\" (UID: \"c44039e8-f318-4ec2-bd3c-587834aa4bb8\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf"
Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.613783 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzgn7\" (UniqueName: \"kubernetes.io/projected/c44039e8-f318-4ec2-bd3c-587834aa4bb8-kube-api-access-vzgn7\") pod \"marketplace-operator-79b997595-f8tnf\" (UID: \"c44039e8-f318-4ec2-bd3c-587834aa4bb8\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf"
Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.622565 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c44039e8-f318-4ec2-bd3c-587834aa4bb8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-f8tnf\" (UID: \"c44039e8-f318-4ec2-bd3c-587834aa4bb8\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf"
Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.782851 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf"
Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.893806 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lg725"
Need to start a new one" pod="openshift-marketplace/certified-operators-lg725" Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.990171 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aaa7ad3-f4ef-4a15-993d-43166028f71b-utilities\") pod \"2aaa7ad3-f4ef-4a15-993d-43166028f71b\" (UID: \"2aaa7ad3-f4ef-4a15-993d-43166028f71b\") " Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.990218 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aaa7ad3-f4ef-4a15-993d-43166028f71b-catalog-content\") pod \"2aaa7ad3-f4ef-4a15-993d-43166028f71b\" (UID: \"2aaa7ad3-f4ef-4a15-993d-43166028f71b\") " Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.990296 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzm4c\" (UniqueName: \"kubernetes.io/projected/2aaa7ad3-f4ef-4a15-993d-43166028f71b-kube-api-access-mzm4c\") pod \"2aaa7ad3-f4ef-4a15-993d-43166028f71b\" (UID: \"2aaa7ad3-f4ef-4a15-993d-43166028f71b\") " Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.990999 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aaa7ad3-f4ef-4a15-993d-43166028f71b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2aaa7ad3-f4ef-4a15-993d-43166028f71b" (UID: "2aaa7ad3-f4ef-4a15-993d-43166028f71b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.991552 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aaa7ad3-f4ef-4a15-993d-43166028f71b-utilities" (OuterVolumeSpecName: "utilities") pod "2aaa7ad3-f4ef-4a15-993d-43166028f71b" (UID: "2aaa7ad3-f4ef-4a15-993d-43166028f71b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.991962 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aaa7ad3-f4ef-4a15-993d-43166028f71b-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.991987 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aaa7ad3-f4ef-4a15-993d-43166028f71b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:25 crc kubenswrapper[4620]: I0129 15:05:25.994451 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aaa7ad3-f4ef-4a15-993d-43166028f71b-kube-api-access-mzm4c" (OuterVolumeSpecName: "kube-api-access-mzm4c") pod "2aaa7ad3-f4ef-4a15-993d-43166028f71b" (UID: "2aaa7ad3-f4ef-4a15-993d-43166028f71b"). InnerVolumeSpecName "kube-api-access-mzm4c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.094763 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzm4c\" (UniqueName: \"kubernetes.io/projected/2aaa7ad3-f4ef-4a15-993d-43166028f71b-kube-api-access-mzm4c\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.110970 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lg725" event={"ID":"2aaa7ad3-f4ef-4a15-993d-43166028f71b","Type":"ContainerDied","Data":"95c222983709117248415b7744e61e9241e2b2785ce891797c4dba5a045ffab7"} Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.111019 4620 scope.go:117] "RemoveContainer" containerID="1b935450c9063427d649f9e2d69f15f84dfc4064df6f628097f2ba9abb059d69" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.111126 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lg725" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.119378 4620 generic.go:334] "Generic (PLEG): container finished" podID="7436f3bf-66e4-4314-aa0b-8af645dd5bee" containerID="b54ea343b2ffad95c7a0f0fb31efb8d3f4d9e71ef55fe8c545e2c3932946b840" exitCode=0 Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.119418 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" event={"ID":"7436f3bf-66e4-4314-aa0b-8af645dd5bee","Type":"ContainerDied","Data":"b54ea343b2ffad95c7a0f0fb31efb8d3f4d9e71ef55fe8c545e2c3932946b840"} Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.173215 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lg725"] Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.176875 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lg725"] Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.254493 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.268317 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l8bmh" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.290371 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xwr75" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.297356 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7436f3bf-66e4-4314-aa0b-8af645dd5bee-marketplace-operator-metrics\") pod \"7436f3bf-66e4-4314-aa0b-8af645dd5bee\" (UID: \"7436f3bf-66e4-4314-aa0b-8af645dd5bee\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.297418 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dz9m\" (UniqueName: \"kubernetes.io/projected/7436f3bf-66e4-4314-aa0b-8af645dd5bee-kube-api-access-5dz9m\") pod \"7436f3bf-66e4-4314-aa0b-8af645dd5bee\" (UID: \"7436f3bf-66e4-4314-aa0b-8af645dd5bee\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.297449 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2-utilities\") pod \"0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2\" (UID: \"0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.297487 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6ws5\" (UniqueName: \"kubernetes.io/projected/0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2-kube-api-access-w6ws5\") pod \"0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2\" (UID: \"0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.298663 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2-catalog-content\") pod \"0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2\" (UID: \"0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.299138 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7436f3bf-66e4-4314-aa0b-8af645dd5bee-marketplace-trusted-ca\") pod \"7436f3bf-66e4-4314-aa0b-8af645dd5bee\" (UID: \"7436f3bf-66e4-4314-aa0b-8af645dd5bee\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.300392 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2-utilities" (OuterVolumeSpecName: "utilities") pod "0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" (UID: "0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.300969 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7436f3bf-66e4-4314-aa0b-8af645dd5bee-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "7436f3bf-66e4-4314-aa0b-8af645dd5bee" (UID: "7436f3bf-66e4-4314-aa0b-8af645dd5bee"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.301136 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" (UID: "0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.305635 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7436f3bf-66e4-4314-aa0b-8af645dd5bee-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "7436f3bf-66e4-4314-aa0b-8af645dd5bee" (UID: "7436f3bf-66e4-4314-aa0b-8af645dd5bee"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.308326 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2-kube-api-access-w6ws5" (OuterVolumeSpecName: "kube-api-access-w6ws5") pod "0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" (UID: "0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2"). InnerVolumeSpecName "kube-api-access-w6ws5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.310394 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7436f3bf-66e4-4314-aa0b-8af645dd5bee-kube-api-access-5dz9m" (OuterVolumeSpecName: "kube-api-access-5dz9m") pod "7436f3bf-66e4-4314-aa0b-8af645dd5bee" (UID: "7436f3bf-66e4-4314-aa0b-8af645dd5bee"). InnerVolumeSpecName "kube-api-access-5dz9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.323198 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jn68n" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.398203 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-669zb" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.400231 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k7hs\" (UniqueName: \"kubernetes.io/projected/67739a3a-d009-4685-a79a-aaa81f5b2daf-kube-api-access-5k7hs\") pod \"67739a3a-d009-4685-a79a-aaa81f5b2daf\" (UID: \"67739a3a-d009-4685-a79a-aaa81f5b2daf\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.400294 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff-utilities\") pod \"2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff\" (UID: \"2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.400319 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67739a3a-d009-4685-a79a-aaa81f5b2daf-utilities\") pod \"67739a3a-d009-4685-a79a-aaa81f5b2daf\" (UID: \"67739a3a-d009-4685-a79a-aaa81f5b2daf\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.400347 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff-catalog-content\") pod \"2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff\" (UID: \"2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.400369 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbrp7\" (UniqueName: \"kubernetes.io/projected/2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff-kube-api-access-xbrp7\") pod \"2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff\" (UID: \"2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.400406 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67739a3a-d009-4685-a79a-aaa81f5b2daf-catalog-content\") pod \"67739a3a-d009-4685-a79a-aaa81f5b2daf\" (UID: \"67739a3a-d009-4685-a79a-aaa81f5b2daf\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.400697 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.400711 4620 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7436f3bf-66e4-4314-aa0b-8af645dd5bee-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.400721 4620 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7436f3bf-66e4-4314-aa0b-8af645dd5bee-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.400731 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dz9m\" (UniqueName: \"kubernetes.io/projected/7436f3bf-66e4-4314-aa0b-8af645dd5bee-kube-api-access-5dz9m\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.400739 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.400747 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6ws5\" (UniqueName: \"kubernetes.io/projected/0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2-kube-api-access-w6ws5\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.401063 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67739a3a-d009-4685-a79a-aaa81f5b2daf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67739a3a-d009-4685-a79a-aaa81f5b2daf" (UID: "67739a3a-d009-4685-a79a-aaa81f5b2daf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.403706 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67739a3a-d009-4685-a79a-aaa81f5b2daf-utilities" (OuterVolumeSpecName: "utilities") pod "67739a3a-d009-4685-a79a-aaa81f5b2daf" (UID: "67739a3a-d009-4685-a79a-aaa81f5b2daf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.405452 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff-utilities" (OuterVolumeSpecName: "utilities") pod "2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" (UID: "2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.407590 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" (UID: "2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.410932 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff-kube-api-access-xbrp7" (OuterVolumeSpecName: "kube-api-access-xbrp7") pod "2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" (UID: "2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff"). InnerVolumeSpecName "kube-api-access-xbrp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.436216 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67739a3a-d009-4685-a79a-aaa81f5b2daf-kube-api-access-5k7hs" (OuterVolumeSpecName: "kube-api-access-5k7hs") pod "67739a3a-d009-4685-a79a-aaa81f5b2daf" (UID: "67739a3a-d009-4685-a79a-aaa81f5b2daf"). InnerVolumeSpecName "kube-api-access-5k7hs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.440210 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tqw2q" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.449483 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mzcfs" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.489294 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mn8xm" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.502086 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d401dd8c-5cfc-4cbd-92de-bb9896e90ea0-utilities\") pod \"d401dd8c-5cfc-4cbd-92de-bb9896e90ea0\" (UID: \"d401dd8c-5cfc-4cbd-92de-bb9896e90ea0\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.502140 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60d03ebd-82d4-4ffe-895e-2909c15480d7-catalog-content\") pod \"60d03ebd-82d4-4ffe-895e-2909c15480d7\" (UID: \"60d03ebd-82d4-4ffe-895e-2909c15480d7\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.502484 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60d03ebd-82d4-4ffe-895e-2909c15480d7-utilities\") pod \"60d03ebd-82d4-4ffe-895e-2909c15480d7\" (UID: \"60d03ebd-82d4-4ffe-895e-2909c15480d7\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.502529 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqh7x\" (UniqueName: \"kubernetes.io/projected/a7f8b2b8-8396-425a-94ac-e66deddac937-kube-api-access-cqh7x\") pod \"a7f8b2b8-8396-425a-94ac-e66deddac937\" (UID: \"a7f8b2b8-8396-425a-94ac-e66deddac937\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.502558 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d401dd8c-5cfc-4cbd-92de-bb9896e90ea0-catalog-content\") pod \"d401dd8c-5cfc-4cbd-92de-bb9896e90ea0\" (UID: \"d401dd8c-5cfc-4cbd-92de-bb9896e90ea0\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.502573 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8jkp\" (UniqueName: \"kubernetes.io/projected/60d03ebd-82d4-4ffe-895e-2909c15480d7-kube-api-access-c8jkp\") pod \"60d03ebd-82d4-4ffe-895e-2909c15480d7\" (UID: \"60d03ebd-82d4-4ffe-895e-2909c15480d7\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.502592 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw6hg\" (UniqueName: \"kubernetes.io/projected/d401dd8c-5cfc-4cbd-92de-bb9896e90ea0-kube-api-access-nw6hg\") pod \"d401dd8c-5cfc-4cbd-92de-bb9896e90ea0\" (UID: \"d401dd8c-5cfc-4cbd-92de-bb9896e90ea0\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.502609 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f8b2b8-8396-425a-94ac-e66deddac937-utilities\") pod \"a7f8b2b8-8396-425a-94ac-e66deddac937\" (UID: \"a7f8b2b8-8396-425a-94ac-e66deddac937\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.502646 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f8b2b8-8396-425a-94ac-e66deddac937-catalog-content\") pod \"a7f8b2b8-8396-425a-94ac-e66deddac937\" (UID: \"a7f8b2b8-8396-425a-94ac-e66deddac937\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.502924 4620 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k7hs\" (UniqueName: \"kubernetes.io/projected/67739a3a-d009-4685-a79a-aaa81f5b2daf-kube-api-access-5k7hs\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.502938 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.502946 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67739a3a-d009-4685-a79a-aaa81f5b2daf-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.502956 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.502967 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbrp7\" (UniqueName: \"kubernetes.io/projected/2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff-kube-api-access-xbrp7\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.502978 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67739a3a-d009-4685-a79a-aaa81f5b2daf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.503246 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7f8b2b8-8396-425a-94ac-e66deddac937-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7f8b2b8-8396-425a-94ac-e66deddac937" (UID: "a7f8b2b8-8396-425a-94ac-e66deddac937"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.503397 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d401dd8c-5cfc-4cbd-92de-bb9896e90ea0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" (UID: "d401dd8c-5cfc-4cbd-92de-bb9896e90ea0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.503636 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d401dd8c-5cfc-4cbd-92de-bb9896e90ea0-utilities" (OuterVolumeSpecName: "utilities") pod "d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" (UID: "d401dd8c-5cfc-4cbd-92de-bb9896e90ea0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.503879 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60d03ebd-82d4-4ffe-895e-2909c15480d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60d03ebd-82d4-4ffe-895e-2909c15480d7" (UID: "60d03ebd-82d4-4ffe-895e-2909c15480d7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.503926 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7f8b2b8-8396-425a-94ac-e66deddac937-utilities" (OuterVolumeSpecName: "utilities") pod "a7f8b2b8-8396-425a-94ac-e66deddac937" (UID: "a7f8b2b8-8396-425a-94ac-e66deddac937"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.505691 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60d03ebd-82d4-4ffe-895e-2909c15480d7-utilities" (OuterVolumeSpecName: "utilities") pod "60d03ebd-82d4-4ffe-895e-2909c15480d7" (UID: "60d03ebd-82d4-4ffe-895e-2909c15480d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.507501 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7f8b2b8-8396-425a-94ac-e66deddac937-kube-api-access-cqh7x" (OuterVolumeSpecName: "kube-api-access-cqh7x") pod "a7f8b2b8-8396-425a-94ac-e66deddac937" (UID: "a7f8b2b8-8396-425a-94ac-e66deddac937"). InnerVolumeSpecName "kube-api-access-cqh7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.507911 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60d03ebd-82d4-4ffe-895e-2909c15480d7-kube-api-access-c8jkp" (OuterVolumeSpecName: "kube-api-access-c8jkp") pod "60d03ebd-82d4-4ffe-895e-2909c15480d7" (UID: "60d03ebd-82d4-4ffe-895e-2909c15480d7"). InnerVolumeSpecName "kube-api-access-c8jkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.509311 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d401dd8c-5cfc-4cbd-92de-bb9896e90ea0-kube-api-access-nw6hg" (OuterVolumeSpecName: "kube-api-access-nw6hg") pod "d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" (UID: "d401dd8c-5cfc-4cbd-92de-bb9896e90ea0"). InnerVolumeSpecName "kube-api-access-nw6hg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.604309 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ced9400-3ad4-4717-ab18-327d5e40daa4-utilities\") pod \"8ced9400-3ad4-4717-ab18-327d5e40daa4\" (UID: \"8ced9400-3ad4-4717-ab18-327d5e40daa4\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.604407 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktvhb\" (UniqueName: \"kubernetes.io/projected/8ced9400-3ad4-4717-ab18-327d5e40daa4-kube-api-access-ktvhb\") pod \"8ced9400-3ad4-4717-ab18-327d5e40daa4\" (UID: \"8ced9400-3ad4-4717-ab18-327d5e40daa4\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.604471 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ced9400-3ad4-4717-ab18-327d5e40daa4-catalog-content\") pod \"8ced9400-3ad4-4717-ab18-327d5e40daa4\" (UID: \"8ced9400-3ad4-4717-ab18-327d5e40daa4\") " Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.604790 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f8b2b8-8396-425a-94ac-e66deddac937-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.604814 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d401dd8c-5cfc-4cbd-92de-bb9896e90ea0-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.604828 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60d03ebd-82d4-4ffe-895e-2909c15480d7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.604841 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60d03ebd-82d4-4ffe-895e-2909c15480d7-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.604853 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqh7x\" (UniqueName: \"kubernetes.io/projected/a7f8b2b8-8396-425a-94ac-e66deddac937-kube-api-access-cqh7x\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.604865 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d401dd8c-5cfc-4cbd-92de-bb9896e90ea0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.604878 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8jkp\" (UniqueName: \"kubernetes.io/projected/60d03ebd-82d4-4ffe-895e-2909c15480d7-kube-api-access-c8jkp\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.604889 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw6hg\" (UniqueName: \"kubernetes.io/projected/d401dd8c-5cfc-4cbd-92de-bb9896e90ea0-kube-api-access-nw6hg\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.604901 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f8b2b8-8396-425a-94ac-e66deddac937-utilities\") on node \"crc\" DevicePath \"\"" 
Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.604810 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ced9400-3ad4-4717-ab18-327d5e40daa4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ced9400-3ad4-4717-ab18-327d5e40daa4" (UID: "8ced9400-3ad4-4717-ab18-327d5e40daa4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.605699 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ced9400-3ad4-4717-ab18-327d5e40daa4-utilities" (OuterVolumeSpecName: "utilities") pod "8ced9400-3ad4-4717-ab18-327d5e40daa4" (UID: "8ced9400-3ad4-4717-ab18-327d5e40daa4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.607568 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ced9400-3ad4-4717-ab18-327d5e40daa4-kube-api-access-ktvhb" (OuterVolumeSpecName: "kube-api-access-ktvhb") pod "8ced9400-3ad4-4717-ab18-327d5e40daa4" (UID: "8ced9400-3ad4-4717-ab18-327d5e40daa4"). InnerVolumeSpecName "kube-api-access-ktvhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.705600 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktvhb\" (UniqueName: \"kubernetes.io/projected/8ced9400-3ad4-4717-ab18-327d5e40daa4-kube-api-access-ktvhb\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.705640 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ced9400-3ad4-4717-ab18-327d5e40daa4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.705651 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ced9400-3ad4-4717-ab18-327d5e40daa4-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.755137 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f8tnf"] Jan 29 15:05:26 crc kubenswrapper[4620]: W0129 15:05:26.759408 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc44039e8_f318_4ec2_bd3c_587834aa4bb8.slice/crio-466574e94f171afc8ec37e5717a32f20800885191603b54eb8a89883944af9c4 WatchSource:0}: Error finding container 466574e94f171afc8ec37e5717a32f20800885191603b54eb8a89883944af9c4: Status 404 returned error can't find the container with id 466574e94f171afc8ec37e5717a32f20800885191603b54eb8a89883944af9c4 Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.802916 4620 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 15:05:26 crc kubenswrapper[4620]: E0129 15:05:26.803248 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.803271 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: E0129 15:05:26.803290 4620 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.803298 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: E0129 15:05:26.803310 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aaa7ad3-f4ef-4a15-993d-43166028f71b" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.803322 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aaa7ad3-f4ef-4a15-993d-43166028f71b" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: E0129 15:05:26.803331 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.803339 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: E0129 15:05:26.803349 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7436f3bf-66e4-4314-aa0b-8af645dd5bee" containerName="marketplace-operator" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.803356 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="7436f3bf-66e4-4314-aa0b-8af645dd5bee" containerName="marketplace-operator" Jan 29 15:05:26 crc kubenswrapper[4620]: E0129 15:05:26.803366 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.803373 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: E0129 15:05:26.803385 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.803393 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: E0129 15:05:26.803402 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.803409 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: E0129 15:05:26.803422 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.803429 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.803528 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.803539 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" containerName="extract-utilities" Jan 29 15:05:26 
crc kubenswrapper[4620]: I0129 15:05:26.803548 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="7436f3bf-66e4-4314-aa0b-8af645dd5bee" containerName="marketplace-operator" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.803559 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.803568 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.803578 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.803589 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aaa7ad3-f4ef-4a15-993d-43166028f71b" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.803599 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.803606 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" containerName="extract-utilities" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.804052 4620 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.804081 4620 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.804191 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:05:26 crc kubenswrapper[4620]: E0129 15:05:26.804202 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.804335 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 15:05:26 crc kubenswrapper[4620]: E0129 15:05:26.804363 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.804371 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 15:05:26 crc kubenswrapper[4620]: E0129 15:05:26.804385 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.804393 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:05:26 crc kubenswrapper[4620]: E0129 15:05:26.804402 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.804409 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 15:05:26 crc kubenswrapper[4620]: E0129 15:05:26.804422 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.804428 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 15:05:26 crc kubenswrapper[4620]: E0129 15:05:26.804445 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.804452 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:05:26 crc kubenswrapper[4620]: E0129 15:05:26.804466 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.804477 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:05:26 crc kubenswrapper[4620]: E0129 15:05:26.804498 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.804505 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.804746 4620 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc" gracePeriod=15 Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.804987 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.804989 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0" gracePeriod=15 Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.805015 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec" gracePeriod=15 Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.805056 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6" gracePeriod=15 Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.805003 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.805097 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670" gracePeriod=15 Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.805114 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.805137 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.805150 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.805159 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.805173 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.809441 4620 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 29 15:05:26 crc 
kubenswrapper[4620]: I0129 15:05:26.847046 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.879670 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aaa7ad3-f4ef-4a15-993d-43166028f71b" path="/var/lib/kubelet/pods/2aaa7ad3-f4ef-4a15-993d-43166028f71b/volumes" Jan 29 15:05:26 crc kubenswrapper[4620]: E0129 15:05:26.902878 4620 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.76:6443: connect: connection refused" event="&Event{ObjectMeta:{marketplace-operator-79b997595-f8tnf.188f3bfaca54cab9 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:marketplace-operator-79b997595-f8tnf,UID:c44039e8-f318-4ec2-bd3c-587834aa4bb8,APIVersion:v1,ResourceVersion:29988,FieldPath:spec.containers{marketplace-operator},},Reason:Created,Message:Created container marketplace-operator,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 15:05:26.902311609 +0000 UTC m=+267.515139254,LastTimestamp:2026-01-29 15:05:26.902311609 +0000 UTC m=+267.515139254,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.912369 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.912425 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.912445 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.913336 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.913410 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.913470 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.913504 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:05:26 crc kubenswrapper[4620]: I0129 15:05:26.913529 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.014300 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.014807 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.014927 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.015028 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.015134 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.015342 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.014927 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.015288 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.015352 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.015532 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.015594 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.015744 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.015813 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.015777 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.014991 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 
29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.015948 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:05:27 crc kubenswrapper[4620]: E0129 15:05:27.053337 4620 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.76:6443: connect: connection refused" event="&Event{ObjectMeta:{marketplace-operator-79b997595-f8tnf.188f3bfaca54cab9 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:marketplace-operator-79b997595-f8tnf,UID:c44039e8-f318-4ec2-bd3c-587834aa4bb8,APIVersion:v1,ResourceVersion:29988,FieldPath:spec.containers{marketplace-operator},},Reason:Created,Message:Created container marketplace-operator,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 15:05:26.902311609 +0000 UTC m=+267.515139254,LastTimestamp:2026-01-29 15:05:26.902311609 +0000 UTC m=+267.515139254,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.126520 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xwr75" event={"ID":"2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff","Type":"ContainerDied","Data":"40627e01719de76b933acc4bef8d86eb5d58ea9de485a55cbc01a63d942c6ea8"} Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.126882 4620 scope.go:117] "RemoveContainer" containerID="b9f420004e20579ac5d0621693eb8452ec2d9beb6aa254c1c00db93d7f0b29b0" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.126588 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xwr75" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.128100 4620 status_manager.go:851] "Failed to get status for pod" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" pod="openshift-marketplace/redhat-marketplace-xwr75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xwr75\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.128587 4620 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.129023 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-669zb" event={"ID":"60d03ebd-82d4-4ffe-895e-2909c15480d7","Type":"ContainerDied","Data":"191136b7a1d61a5abcc8674069506d85636f4c97724afe1b72c00672db610d71"} Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.129164 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-669zb" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.129714 4620 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.130012 4620 status_manager.go:851] "Failed to get status for pod" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" pod="openshift-marketplace/redhat-marketplace-xwr75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xwr75\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.130236 4620 status_manager.go:851] "Failed to get status for pod" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" pod="openshift-marketplace/redhat-operators-669zb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-669zb\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.131615 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l8bmh" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.131710 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l8bmh" event={"ID":"0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2","Type":"ContainerDied","Data":"017ce28f47c8e9d6385e692d617717025f6d464ba9253a3774c124296ac3c309"} Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.132135 4620 status_manager.go:851] "Failed to get status for pod" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" pod="openshift-marketplace/community-operators-l8bmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-l8bmh\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.132381 4620 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.132850 4620 status_manager.go:851] "Failed to get status for pod" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" pod="openshift-marketplace/redhat-marketplace-xwr75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xwr75\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.133154 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqw2q" event={"ID":"a7f8b2b8-8396-425a-94ac-e66deddac937","Type":"ContainerDied","Data":"b6f6c81bfa5c5d6f70a4b1f498caa9fbd5aa82cf7e477093cbf6690c029b47cd"} Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.133216 4620 status_manager.go:851] "Failed to get status for pod" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" pod="openshift-marketplace/redhat-operators-669zb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-669zb\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.133292 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tqw2q" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.133706 4620 status_manager.go:851] "Failed to get status for pod" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" pod="openshift-marketplace/community-operators-l8bmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-l8bmh\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.133958 4620 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.134269 4620 status_manager.go:851] "Failed to get status for pod" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" pod="openshift-marketplace/redhat-marketplace-xwr75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xwr75\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.134526 4620 status_manager.go:851] "Failed to get status for pod" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" pod="openshift-marketplace/redhat-operators-669zb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-669zb\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.134798 4620 status_manager.go:851] "Failed to get status for pod" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" pod="openshift-marketplace/redhat-operators-669zb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-669zb\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.135088 4620 status_manager.go:851] "Failed to get status for pod" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" pod="openshift-marketplace/community-operators-l8bmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-l8bmh\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.135382 4620 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.136257 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" event={"ID":"c44039e8-f318-4ec2-bd3c-587834aa4bb8","Type":"ContainerStarted","Data":"e80ddaca664a66250eaa962faa604217ca14cd6fe7f837cf6c4c159ccc5ad7c0"} Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 
15:05:27.136347 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" event={"ID":"c44039e8-f318-4ec2-bd3c-587834aa4bb8","Type":"ContainerStarted","Data":"466574e94f171afc8ec37e5717a32f20800885191603b54eb8a89883944af9c4"} Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.136494 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.136681 4620 status_manager.go:851] "Failed to get status for pod" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" pod="openshift-marketplace/redhat-operators-tqw2q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqw2q\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.137051 4620 status_manager.go:851] "Failed to get status for pod" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" pod="openshift-marketplace/redhat-marketplace-xwr75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xwr75\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.137953 4620 status_manager.go:851] "Failed to get status for pod" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" pod="openshift-marketplace/redhat-operators-669zb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-669zb\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.138155 4620 status_manager.go:851] "Failed to get status for pod" podUID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-f8tnf\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.138327 4620 status_manager.go:851] "Failed to get status for pod" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" pod="openshift-marketplace/community-operators-l8bmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-l8bmh\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.138492 4620 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.138816 4620 status_manager.go:851] "Failed to get status for pod" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" pod="openshift-marketplace/redhat-operators-tqw2q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqw2q\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.139020 4620 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-f8tnf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused" start-of-body= Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.139064 4620 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" podUID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.139364 4620 status_manager.go:851] "Failed to get status for pod" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" pod="openshift-marketplace/redhat-marketplace-xwr75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xwr75\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.141315 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mn8xm" event={"ID":"8ced9400-3ad4-4717-ab18-327d5e40daa4","Type":"ContainerDied","Data":"0454eb1943a838e1e5e74919528a14d4a9ec62ebd2ad003ab1421642ed8d3deb"} Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.141427 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mn8xm" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.141965 4620 status_manager.go:851] "Failed to get status for pod" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" pod="openshift-marketplace/redhat-marketplace-xwr75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xwr75\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.142121 4620 status_manager.go:851] "Failed to get status for pod" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" pod="openshift-marketplace/redhat-operators-669zb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-669zb\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.142286 4620 status_manager.go:851] "Failed to get status for pod" podUID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-f8tnf\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.142468 4620 status_manager.go:851] "Failed to get status for pod" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" pod="openshift-marketplace/redhat-marketplace-mn8xm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn8xm\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.142619 4620 status_manager.go:851] "Failed to get status for pod" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" pod="openshift-marketplace/community-operators-l8bmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-l8bmh\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.143538 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.144212 4620 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.144427 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" event={"ID":"7436f3bf-66e4-4314-aa0b-8af645dd5bee","Type":"ContainerDied","Data":"3c33f209131256b780b6dc73c06de9985690bdd301e90be2d856fd77b39c58fe"} Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.144464 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.145002 4620 status_manager.go:851] "Failed to get status for pod" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" pod="openshift-marketplace/redhat-operators-tqw2q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqw2q\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.145567 4620 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.146405 4620 status_manager.go:851] "Failed to get status for pod" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" pod="openshift-marketplace/redhat-operators-tqw2q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqw2q\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.146592 4620 status_manager.go:851] "Failed to get status for pod" podUID="7436f3bf-66e4-4314-aa0b-8af645dd5bee" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-zsh8m\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.146798 4620 status_manager.go:851] "Failed to get status for pod" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" pod="openshift-marketplace/redhat-marketplace-xwr75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xwr75\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.147025 4620 status_manager.go:851] "Failed to get status for pod" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" pod="openshift-marketplace/redhat-operators-669zb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-669zb\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.147330 4620 status_manager.go:851] "Failed to get 
status for pod" podUID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-f8tnf\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.147590 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzcfs" event={"ID":"d401dd8c-5cfc-4cbd-92de-bb9896e90ea0","Type":"ContainerDied","Data":"0f9aed9c8828b6e20f852a5d135d30aae116eed52d516d1205d2b26df403fb02"} Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.147446 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzcfs" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.148184 4620 status_manager.go:851] "Failed to get status for pod" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" pod="openshift-marketplace/community-operators-l8bmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-l8bmh\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.148974 4620 status_manager.go:851] "Failed to get status for pod" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" pod="openshift-marketplace/redhat-marketplace-mn8xm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn8xm\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.150078 4620 scope.go:117] "RemoveContainer" containerID="985b6cd289d66eaa145d898a232a4d6969115e51f55356fd56f8e6b069eceb63" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.150419 4620 status_manager.go:851] "Failed to get status for pod" podUID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-f8tnf\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.150578 4620 status_manager.go:851] "Failed to get status for pod" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" pod="openshift-marketplace/certified-operators-mzcfs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzcfs\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.150707 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn68n" event={"ID":"67739a3a-d009-4685-a79a-aaa81f5b2daf","Type":"ContainerDied","Data":"8e5f699266add6fa52314306b97113658e6aa4211884dc8febbec0cf3000099f"} Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.150849 4620 status_manager.go:851] "Failed to get status for pod" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" pod="openshift-marketplace/community-operators-l8bmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-l8bmh\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.150877 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jn68n" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.151498 4620 status_manager.go:851] "Failed to get status for pod" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" pod="openshift-marketplace/redhat-marketplace-mn8xm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn8xm\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.152313 4620 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.152815 4620 status_manager.go:851] "Failed to get status for pod" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" pod="openshift-marketplace/redhat-operators-tqw2q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqw2q\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.153076 4620 generic.go:334] "Generic (PLEG): container finished" podID="7a169668-f370-4ff5-bea7-162091cf8c49" containerID="bd9d886a85fb01f580ee61d82b9b21cffbf8be0d99648d1b10e8d6d2996d5312" exitCode=0 Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.153088 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7a169668-f370-4ff5-bea7-162091cf8c49","Type":"ContainerDied","Data":"bd9d886a85fb01f580ee61d82b9b21cffbf8be0d99648d1b10e8d6d2996d5312"} Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.153701 4620 status_manager.go:851] "Failed to get status for pod" podUID="7436f3bf-66e4-4314-aa0b-8af645dd5bee" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-zsh8m\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.153947 4620 status_manager.go:851] "Failed to get status for pod" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" pod="openshift-marketplace/redhat-marketplace-xwr75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xwr75\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.154161 4620 status_manager.go:851] "Failed to get status for pod" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" pod="openshift-marketplace/redhat-operators-669zb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-669zb\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.154438 4620 status_manager.go:851] "Failed to get status for pod" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" pod="openshift-marketplace/redhat-operators-669zb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-669zb\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.154805 4620 status_manager.go:851] "Failed to get 
status for pod" podUID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-f8tnf\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.156128 4620 status_manager.go:851] "Failed to get status for pod" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" pod="openshift-marketplace/certified-operators-mzcfs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzcfs\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.156596 4620 status_manager.go:851] "Failed to get status for pod" podUID="7a169668-f370-4ff5-bea7-162091cf8c49" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.156957 4620 status_manager.go:851] "Failed to get status for pod" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" pod="openshift-marketplace/redhat-marketplace-mn8xm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn8xm\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.157411 4620 status_manager.go:851] "Failed to get status for pod" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" pod="openshift-marketplace/community-operators-l8bmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-l8bmh\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.157694 4620 status_manager.go:851] "Failed to get status for pod" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" pod="openshift-marketplace/community-operators-jn68n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jn68n\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.158005 4620 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.158727 4620 status_manager.go:851] "Failed to get status for pod" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" pod="openshift-marketplace/redhat-operators-tqw2q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqw2q\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.159019 4620 status_manager.go:851] "Failed to get status for pod" podUID="7436f3bf-66e4-4314-aa0b-8af645dd5bee" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-zsh8m\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:27 crc kubenswrapper[4620]: 
Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.159557 4620 status_manager.go:851] "Failed to get status for pod" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" pod="openshift-marketplace/redhat-marketplace-xwr75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xwr75\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.160289 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.160912 4620 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0" exitCode=0
Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.160987 4620 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec" exitCode=0
Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.161065 4620 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6" exitCode=0
Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.161127 4620 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670" exitCode=2
Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.176420 4620 scope.go:117] "RemoveContainer" containerID="2c66295885ddd88737f1d0c53ab10c4bddc1e2e00647c2b8ca4b8c1ec65be911"
Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.193671 4620 scope.go:117] "RemoveContainer" containerID="f226daf90f70d8f8499c4862141bc4e37ee135f9a4207cee4c45d1914b6d8410"
Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.209925 4620 scope.go:117] "RemoveContainer" containerID="75924686e6fbad9a59207802306d07d25f7e2c69cf511744e83bd917d018dc04"
Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.233669 4620 scope.go:117] "RemoveContainer" containerID="b54ea343b2ffad95c7a0f0fb31efb8d3f4d9e71ef55fe8c545e2c3932946b840"
Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.263897 4620 scope.go:117] "RemoveContainer" containerID="98c519773309d38c7105fc9e21e1893fc56d3495d84172f52e3f71bcd2105f41"
Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.277196 4620 scope.go:117] "RemoveContainer" containerID="dc1a5f100e79466a9ee303d85d7acb2ea5bbfe60a971d6da5d9f1280672c59fc"
Jan 29 15:05:27 crc kubenswrapper[4620]: I0129 15:05:27.293853 4620 scope.go:117] "RemoveContainer" containerID="aeeba9ab16090f807c69a11a6cbe105678e0df8e77d6dd42b23eece9b0f9a40f"
Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.171422 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.174421 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6819a3f616c654bbeaae8b2467be9e13f8ffb404bf563db4646b746c1eb3588d"}
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6819a3f616c654bbeaae8b2467be9e13f8ffb404bf563db4646b746c1eb3588d"} Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.174457 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e0ffc9f4b2b930024d63a3398135af869bbb3338e047228e61435a0a4ef946f8"} Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.175108 4620 status_manager.go:851] "Failed to get status for pod" podUID="7a169668-f370-4ff5-bea7-162091cf8c49" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.175432 4620 status_manager.go:851] "Failed to get status for pod" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" pod="openshift-marketplace/certified-operators-mzcfs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzcfs\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.175768 4620 status_manager.go:851] "Failed to get status for pod" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" pod="openshift-marketplace/community-operators-l8bmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-l8bmh\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.175987 4620 status_manager.go:851] "Failed to get status for pod" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" pod="openshift-marketplace/redhat-marketplace-mn8xm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn8xm\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.176320 4620 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.176693 4620 status_manager.go:851] "Failed to get status for pod" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" pod="openshift-marketplace/community-operators-jn68n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jn68n\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.176925 4620 status_manager.go:851] "Failed to get status for pod" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" pod="openshift-marketplace/redhat-operators-tqw2q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqw2q\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.177163 4620 status_manager.go:851] "Failed to get status for pod" podUID="7436f3bf-66e4-4314-aa0b-8af645dd5bee" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-zsh8m\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.177397 4620 status_manager.go:851] "Failed to get status for pod" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" pod="openshift-marketplace/redhat-marketplace-xwr75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xwr75\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.177589 4620 status_manager.go:851] "Failed to get status for pod" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" pod="openshift-marketplace/redhat-operators-669zb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-669zb\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.177913 4620 status_manager.go:851] "Failed to get status for pod" podUID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-f8tnf\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.178955 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-f8tnf_c44039e8-f318-4ec2-bd3c-587834aa4bb8/marketplace-operator/0.log" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.179019 4620 generic.go:334] "Generic (PLEG): container finished" podID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" containerID="e80ddaca664a66250eaa962faa604217ca14cd6fe7f837cf6c4c159ccc5ad7c0" exitCode=1 Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.179049 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" event={"ID":"c44039e8-f318-4ec2-bd3c-587834aa4bb8","Type":"ContainerDied","Data":"e80ddaca664a66250eaa962faa604217ca14cd6fe7f837cf6c4c159ccc5ad7c0"} Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.179539 4620 scope.go:117] "RemoveContainer" containerID="e80ddaca664a66250eaa962faa604217ca14cd6fe7f837cf6c4c159ccc5ad7c0" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.179616 4620 status_manager.go:851] "Failed to get status for pod" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" pod="openshift-marketplace/redhat-operators-669zb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-669zb\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.179980 4620 status_manager.go:851] "Failed to get status for pod" podUID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-f8tnf\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.180194 4620 status_manager.go:851] "Failed to get status for pod" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" pod="openshift-marketplace/certified-operators-mzcfs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzcfs\": dial 
Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.180428 4620 status_manager.go:851] "Failed to get status for pod" podUID="7a169668-f370-4ff5-bea7-162091cf8c49" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.180689 4620 status_manager.go:851] "Failed to get status for pod" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" pod="openshift-marketplace/community-operators-l8bmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-l8bmh\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.180946 4620 status_manager.go:851] "Failed to get status for pod" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" pod="openshift-marketplace/redhat-marketplace-mn8xm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn8xm\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.181450 4620 status_manager.go:851] "Failed to get status for pod" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" pod="openshift-marketplace/community-operators-jn68n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jn68n\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.181673 4620 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.181897 4620 status_manager.go:851] "Failed to get status for pod" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" pod="openshift-marketplace/redhat-operators-tqw2q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqw2q\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.182093 4620 status_manager.go:851] "Failed to get status for pod" podUID="7436f3bf-66e4-4314-aa0b-8af645dd5bee" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-zsh8m\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.182283 4620 status_manager.go:851] "Failed to get status for pod" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" pod="openshift-marketplace/redhat-marketplace-xwr75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xwr75\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.500593 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.501354 4620 status_manager.go:851] "Failed to get status for pod" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" pod="openshift-marketplace/redhat-operators-669zb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-669zb\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.501769 4620 status_manager.go:851] "Failed to get status for pod" podUID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-f8tnf\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.502114 4620 status_manager.go:851] "Failed to get status for pod" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" pod="openshift-marketplace/certified-operators-mzcfs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzcfs\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.502404 4620 status_manager.go:851] "Failed to get status for pod" podUID="7a169668-f370-4ff5-bea7-162091cf8c49" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.502791 4620 status_manager.go:851] "Failed to get status for pod" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" pod="openshift-marketplace/community-operators-l8bmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-l8bmh\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.503222 4620 status_manager.go:851] "Failed to get status for pod" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" pod="openshift-marketplace/redhat-marketplace-mn8xm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn8xm\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.503704 4620 status_manager.go:851] "Failed to get status for pod" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" pod="openshift-marketplace/community-operators-jn68n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jn68n\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.504060 4620 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.504295 4620 status_manager.go:851] "Failed to get status for pod" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" pod="openshift-marketplace/redhat-operators-tqw2q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqw2q\": dial tcp 38.129.56.76:6443: connect: connection refused"
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqw2q\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.504542 4620 status_manager.go:851] "Failed to get status for pod" podUID="7436f3bf-66e4-4314-aa0b-8af645dd5bee" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-zsh8m\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.504850 4620 status_manager.go:851] "Failed to get status for pod" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" pod="openshift-marketplace/redhat-marketplace-xwr75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xwr75\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.543042 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a169668-f370-4ff5-bea7-162091cf8c49-kubelet-dir\") pod \"7a169668-f370-4ff5-bea7-162091cf8c49\" (UID: \"7a169668-f370-4ff5-bea7-162091cf8c49\") " Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.543147 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a169668-f370-4ff5-bea7-162091cf8c49-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7a169668-f370-4ff5-bea7-162091cf8c49" (UID: "7a169668-f370-4ff5-bea7-162091cf8c49"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.543626 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a169668-f370-4ff5-bea7-162091cf8c49-var-lock\") pod \"7a169668-f370-4ff5-bea7-162091cf8c49\" (UID: \"7a169668-f370-4ff5-bea7-162091cf8c49\") " Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.543707 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a169668-f370-4ff5-bea7-162091cf8c49-var-lock" (OuterVolumeSpecName: "var-lock") pod "7a169668-f370-4ff5-bea7-162091cf8c49" (UID: "7a169668-f370-4ff5-bea7-162091cf8c49"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.543749 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a169668-f370-4ff5-bea7-162091cf8c49-kube-api-access\") pod \"7a169668-f370-4ff5-bea7-162091cf8c49\" (UID: \"7a169668-f370-4ff5-bea7-162091cf8c49\") " Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.544928 4620 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a169668-f370-4ff5-bea7-162091cf8c49-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.544944 4620 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a169668-f370-4ff5-bea7-162091cf8c49-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.549004 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a169668-f370-4ff5-bea7-162091cf8c49-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7a169668-f370-4ff5-bea7-162091cf8c49" (UID: "7a169668-f370-4ff5-bea7-162091cf8c49"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:05:28 crc kubenswrapper[4620]: I0129 15:05:28.645947 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a169668-f370-4ff5-bea7-162091cf8c49-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.190830 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-f8tnf_c44039e8-f318-4ec2-bd3c-587834aa4bb8/marketplace-operator/1.log" Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.191629 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-f8tnf_c44039e8-f318-4ec2-bd3c-587834aa4bb8/marketplace-operator/0.log" Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.191673 4620 generic.go:334] "Generic (PLEG): container finished" podID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" containerID="fab0c6ae1ad79ab540bbf36f0baf0509a2cfa1933370a5b2ef8cc6b30cd063d4" exitCode=1 Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.191732 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" event={"ID":"c44039e8-f318-4ec2-bd3c-587834aa4bb8","Type":"ContainerDied","Data":"fab0c6ae1ad79ab540bbf36f0baf0509a2cfa1933370a5b2ef8cc6b30cd063d4"} Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.191788 4620 scope.go:117] "RemoveContainer" containerID="e80ddaca664a66250eaa962faa604217ca14cd6fe7f837cf6c4c159ccc5ad7c0" Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.192301 4620 scope.go:117] "RemoveContainer" containerID="fab0c6ae1ad79ab540bbf36f0baf0509a2cfa1933370a5b2ef8cc6b30cd063d4" Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.192439 4620 status_manager.go:851] "Failed to get status for pod" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" pod="openshift-marketplace/redhat-marketplace-xwr75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xwr75\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:29 crc kubenswrapper[4620]: E0129 15:05:29.192628 
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.192913 4620 status_manager.go:851] "Failed to get status for pod" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" pod="openshift-marketplace/redhat-operators-669zb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-669zb\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.193300 4620 status_manager.go:851] "Failed to get status for pod" podUID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-f8tnf\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.193515 4620 status_manager.go:851] "Failed to get status for pod" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" pod="openshift-marketplace/certified-operators-mzcfs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzcfs\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.193779 4620 status_manager.go:851] "Failed to get status for pod" podUID="7a169668-f370-4ff5-bea7-162091cf8c49" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.194037 4620 status_manager.go:851] "Failed to get status for pod" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" pod="openshift-marketplace/redhat-marketplace-mn8xm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn8xm\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.194249 4620 status_manager.go:851] "Failed to get status for pod" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" pod="openshift-marketplace/community-operators-l8bmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-l8bmh\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.194488 4620 status_manager.go:851] "Failed to get status for pod" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" pod="openshift-marketplace/community-operators-jn68n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jn68n\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.194709 4620 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.194930 4620 status_manager.go:851] "Failed to get status for pod" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" pod="openshift-marketplace/redhat-operators-tqw2q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqw2q\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.195192 4620 status_manager.go:851] "Failed to get status for pod" podUID="7436f3bf-66e4-4314-aa0b-8af645dd5bee" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-zsh8m\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.198804 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.198993 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7a169668-f370-4ff5-bea7-162091cf8c49","Type":"ContainerDied","Data":"4a62ba3b776558d9aac9cc1e2c878503046453130ef1b188ec87b56d6039f3cd"}
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.199041 4620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a62ba3b776558d9aac9cc1e2c878503046453130ef1b188ec87b56d6039f3cd"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.218349 4620 status_manager.go:851] "Failed to get status for pod" podUID="7436f3bf-66e4-4314-aa0b-8af645dd5bee" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-zsh8m\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.218780 4620 status_manager.go:851] "Failed to get status for pod" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" pod="openshift-marketplace/redhat-marketplace-xwr75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xwr75\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.219008 4620 status_manager.go:851] "Failed to get status for pod" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" pod="openshift-marketplace/redhat-operators-669zb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-669zb\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.219221 4620 status_manager.go:851] "Failed to get status for pod" podUID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-f8tnf\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.219401 4620 status_manager.go:851] "Failed to get status for pod" podUID="7a169668-f370-4ff5-bea7-162091cf8c49" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.76:6443: connect: connection refused"
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.219671 4620 status_manager.go:851] "Failed to get status for pod" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" pod="openshift-marketplace/certified-operators-mzcfs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzcfs\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.219887 4620 status_manager.go:851] "Failed to get status for pod" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" pod="openshift-marketplace/community-operators-l8bmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-l8bmh\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.220080 4620 status_manager.go:851] "Failed to get status for pod" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" pod="openshift-marketplace/redhat-marketplace-mn8xm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn8xm\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.220277 4620 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.220454 4620 status_manager.go:851] "Failed to get status for pod" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" pod="openshift-marketplace/community-operators-jn68n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jn68n\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.220635 4620 status_manager.go:851] "Failed to get status for pod" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" pod="openshift-marketplace/redhat-operators-tqw2q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqw2q\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.364122 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.364921 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.365457 4620 status_manager.go:851] "Failed to get status for pod" podUID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-f8tnf\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.365660 4620 status_manager.go:851] "Failed to get status for pod" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" pod="openshift-marketplace/certified-operators-mzcfs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzcfs\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.365882 4620 status_manager.go:851] "Failed to get status for pod" podUID="7a169668-f370-4ff5-bea7-162091cf8c49" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.366079 4620 status_manager.go:851] "Failed to get status for pod" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" pod="openshift-marketplace/community-operators-l8bmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-l8bmh\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:29 crc kubenswrapper[4620]: E0129 15:05:29.366146 4620 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.366264 4620 status_manager.go:851] "Failed to get status for pod" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" pod="openshift-marketplace/redhat-marketplace-mn8xm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn8xm\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:29 crc kubenswrapper[4620]: E0129 15:05:29.366383 4620 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:29 crc kubenswrapper[4620]: E0129 15:05:29.366642 4620 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:29 crc kubenswrapper[4620]: E0129 15:05:29.366883 4620 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:29 crc kubenswrapper[4620]: E0129 15:05:29.367144 4620 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:29 
Jan 29 15:05:29 crc kubenswrapper[4620]: E0129 15:05:29.367566 4620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.76:6443: connect: connection refused" interval="200ms"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.367975 4620 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.368337 4620 status_manager.go:851] "Failed to get status for pod" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" pod="openshift-marketplace/community-operators-jn68n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jn68n\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.368704 4620 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.369093 4620 status_manager.go:851] "Failed to get status for pod" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" pod="openshift-marketplace/redhat-operators-tqw2q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqw2q\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.369410 4620 status_manager.go:851] "Failed to get status for pod" podUID="7436f3bf-66e4-4314-aa0b-8af645dd5bee" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-zsh8m\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.369754 4620 status_manager.go:851] "Failed to get status for pod" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" pod="openshift-marketplace/redhat-marketplace-xwr75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xwr75\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.370089 4620 status_manager.go:851] "Failed to get status for pod" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" pod="openshift-marketplace/redhat-operators-669zb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-669zb\": dial tcp 38.129.56.76:6443: connect: connection refused"
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.459317 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.459742 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.459987 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.459998 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.460447 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.460543 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.461238 4620 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.461273 4620 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:29 crc kubenswrapper[4620]: I0129 15:05:29.461281 4620 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:05:29 crc kubenswrapper[4620]: E0129 15:05:29.569021 4620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.76:6443: connect: connection refused" interval="400ms" Jan 29 15:05:29 crc kubenswrapper[4620]: E0129 15:05:29.970265 4620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.76:6443: connect: connection refused" interval="800ms" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.204622 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-f8tnf_c44039e8-f318-4ec2-bd3c-587834aa4bb8/marketplace-operator/1.log" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.205394 4620 scope.go:117] "RemoveContainer" containerID="fab0c6ae1ad79ab540bbf36f0baf0509a2cfa1933370a5b2ef8cc6b30cd063d4" Jan 29 15:05:30 crc kubenswrapper[4620]: E0129 15:05:30.205812 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-f8tnf_openshift-marketplace(c44039e8-f318-4ec2-bd3c-587834aa4bb8)\"" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" podUID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.205942 4620 status_manager.go:851] "Failed to get status for pod" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" pod="openshift-marketplace/redhat-marketplace-xwr75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xwr75\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.206325 4620 status_manager.go:851] "Failed to get status for pod" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" pod="openshift-marketplace/redhat-operators-669zb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-669zb\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.206593 4620 status_manager.go:851] "Failed to get status for pod" podUID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-f8tnf\": dial tcp 
38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.207320 4620 status_manager.go:851] "Failed to get status for pod" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" pod="openshift-marketplace/certified-operators-mzcfs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzcfs\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.207652 4620 status_manager.go:851] "Failed to get status for pod" podUID="7a169668-f370-4ff5-bea7-162091cf8c49" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.207956 4620 status_manager.go:851] "Failed to get status for pod" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" pod="openshift-marketplace/community-operators-l8bmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-l8bmh\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.208295 4620 status_manager.go:851] "Failed to get status for pod" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" pod="openshift-marketplace/redhat-marketplace-mn8xm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn8xm\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.208536 4620 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.208729 4620 status_manager.go:851] "Failed to get status for pod" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" pod="openshift-marketplace/community-operators-jn68n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jn68n\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.208937 4620 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.209105 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.209171 4620 status_manager.go:851] "Failed to get status for pod" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" pod="openshift-marketplace/redhat-operators-tqw2q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqw2q\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.209502 4620 status_manager.go:851] "Failed to get 
status for pod" podUID="7436f3bf-66e4-4314-aa0b-8af645dd5bee" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-zsh8m\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.209961 4620 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc" exitCode=0 Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.210000 4620 scope.go:117] "RemoveContainer" containerID="1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.210111 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.222280 4620 scope.go:117] "RemoveContainer" containerID="1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.229885 4620 status_manager.go:851] "Failed to get status for pod" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" pod="openshift-marketplace/redhat-operators-669zb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-669zb\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.233022 4620 status_manager.go:851] "Failed to get status for pod" podUID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-f8tnf\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.233060 4620 scope.go:117] "RemoveContainer" containerID="d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.233368 4620 status_manager.go:851] "Failed to get status for pod" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" pod="openshift-marketplace/certified-operators-mzcfs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzcfs\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.233628 4620 status_manager.go:851] "Failed to get status for pod" podUID="7a169668-f370-4ff5-bea7-162091cf8c49" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.234312 4620 status_manager.go:851] "Failed to get status for pod" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" pod="openshift-marketplace/redhat-marketplace-mn8xm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn8xm\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.234539 4620 status_manager.go:851] "Failed to get status for pod" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" pod="openshift-marketplace/community-operators-l8bmh" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-l8bmh\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.234891 4620 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.235155 4620 status_manager.go:851] "Failed to get status for pod" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" pod="openshift-marketplace/community-operators-jn68n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jn68n\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.235747 4620 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.236236 4620 status_manager.go:851] "Failed to get status for pod" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" pod="openshift-marketplace/redhat-operators-tqw2q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqw2q\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.236575 4620 status_manager.go:851] "Failed to get status for pod" podUID="7436f3bf-66e4-4314-aa0b-8af645dd5bee" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-zsh8m\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.236809 4620 status_manager.go:851] "Failed to get status for pod" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" pod="openshift-marketplace/redhat-marketplace-xwr75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xwr75\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.245266 4620 scope.go:117] "RemoveContainer" containerID="2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.260026 4620 scope.go:117] "RemoveContainer" containerID="9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.283376 4620 scope.go:117] "RemoveContainer" containerID="ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.306027 4620 scope.go:117] "RemoveContainer" containerID="1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0" Jan 29 15:05:30 crc kubenswrapper[4620]: E0129 15:05:30.306595 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\": container 
with ID starting with 1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0 not found: ID does not exist" containerID="1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.306707 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0"} err="failed to get container status \"1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\": rpc error: code = NotFound desc = could not find container \"1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0\": container with ID starting with 1e3636c1a2d1f6c8e33b23244158e1b50dcd183567f291344936cc2cd20198e0 not found: ID does not exist" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.306884 4620 scope.go:117] "RemoveContainer" containerID="1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec" Jan 29 15:05:30 crc kubenswrapper[4620]: E0129 15:05:30.307606 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\": container with ID starting with 1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec not found: ID does not exist" containerID="1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.307703 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec"} err="failed to get container status \"1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\": rpc error: code = NotFound desc = could not find container \"1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec\": container with ID starting with 1e5f5e4d983a38ffd1ff0d2ee1b0c1d8eda12bd443ec9e0c6bb0419558526bec not found: ID does not exist" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.307785 4620 scope.go:117] "RemoveContainer" containerID="d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6" Jan 29 15:05:30 crc kubenswrapper[4620]: E0129 15:05:30.308148 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\": container with ID starting with d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6 not found: ID does not exist" containerID="d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.308254 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6"} err="failed to get container status \"d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\": rpc error: code = NotFound desc = could not find container \"d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6\": container with ID starting with d1cd2e2e173ad5f57d7d0c34128d6744239e0c858855e46d6e51f4c5cb5ff5f6 not found: ID does not exist" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.308354 4620 scope.go:117] "RemoveContainer" containerID="2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670" Jan 29 15:05:30 crc kubenswrapper[4620]: E0129 15:05:30.309488 4620 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\": container with ID starting with 2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670 not found: ID does not exist" containerID="2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.309569 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670"} err="failed to get container status \"2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\": rpc error: code = NotFound desc = could not find container \"2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670\": container with ID starting with 2622783dae8639445659de58a6cf2954be1444460983b99ed97490735ad1b670 not found: ID does not exist" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.309651 4620 scope.go:117] "RemoveContainer" containerID="9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc" Jan 29 15:05:30 crc kubenswrapper[4620]: E0129 15:05:30.309975 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\": container with ID starting with 9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc not found: ID does not exist" containerID="9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.310079 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc"} err="failed to get container status \"9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\": rpc error: code = NotFound desc = could not find container \"9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc\": container with ID starting with 9a6c4925324657d3520c5f98ad4688ec0c337249633d9ff0b974a5954fb678cc not found: ID does not exist" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.310177 4620 scope.go:117] "RemoveContainer" containerID="ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc" Jan 29 15:05:30 crc kubenswrapper[4620]: E0129 15:05:30.310451 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\": container with ID starting with ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc not found: ID does not exist" containerID="ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.310525 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc"} err="failed to get container status \"ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\": rpc error: code = NotFound desc = could not find container \"ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc\": container with ID starting with ec5e5126b3ce0e2b053cf302bcb2a4292a9d437ebc20ba3708a2a11ea7af1edc not found: ID does not exist" Jan 29 15:05:30 crc kubenswrapper[4620]: E0129 15:05:30.771348 4620 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.76:6443: connect: connection refused" interval="1.6s" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.874908 4620 status_manager.go:851] "Failed to get status for pod" podUID="7436f3bf-66e4-4314-aa0b-8af645dd5bee" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-zsh8m\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.875267 4620 status_manager.go:851] "Failed to get status for pod" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" pod="openshift-marketplace/redhat-marketplace-xwr75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xwr75\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.875612 4620 status_manager.go:851] "Failed to get status for pod" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" pod="openshift-marketplace/redhat-operators-669zb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-669zb\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.876111 4620 status_manager.go:851] "Failed to get status for pod" podUID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-f8tnf\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.876405 4620 status_manager.go:851] "Failed to get status for pod" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" pod="openshift-marketplace/certified-operators-mzcfs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzcfs\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.876678 4620 status_manager.go:851] "Failed to get status for pod" podUID="7a169668-f370-4ff5-bea7-162091cf8c49" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.877003 4620 status_manager.go:851] "Failed to get status for pod" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" pod="openshift-marketplace/community-operators-l8bmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-l8bmh\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.877296 4620 status_manager.go:851] "Failed to get status for pod" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" pod="openshift-marketplace/redhat-marketplace-mn8xm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn8xm\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.877612 4620 status_manager.go:851] "Failed to get status for pod" 
podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.877938 4620 status_manager.go:851] "Failed to get status for pod" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" pod="openshift-marketplace/community-operators-jn68n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jn68n\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.878194 4620 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.878513 4620 status_manager.go:851] "Failed to get status for pod" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" pod="openshift-marketplace/redhat-operators-tqw2q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqw2q\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:30 crc kubenswrapper[4620]: I0129 15:05:30.881097 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 29 15:05:32 crc kubenswrapper[4620]: E0129 15:05:32.373473 4620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.76:6443: connect: connection refused" interval="3.2s" Jan 29 15:05:34 crc kubenswrapper[4620]: E0129 15:05:34.915683 4620 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.76:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" volumeName="registry-storage" Jan 29 15:05:35 crc kubenswrapper[4620]: E0129 15:05:35.574442 4620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.76:6443: connect: connection refused" interval="6.4s" Jan 29 15:05:35 crc kubenswrapper[4620]: I0129 15:05:35.784591 4620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" Jan 29 15:05:35 crc kubenswrapper[4620]: I0129 15:05:35.784655 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" Jan 29 15:05:35 crc kubenswrapper[4620]: I0129 15:05:35.785094 4620 scope.go:117] "RemoveContainer" containerID="fab0c6ae1ad79ab540bbf36f0baf0509a2cfa1933370a5b2ef8cc6b30cd063d4" Jan 29 15:05:35 crc kubenswrapper[4620]: E0129 15:05:35.785337 4620 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-f8tnf_openshift-marketplace(c44039e8-f318-4ec2-bd3c-587834aa4bb8)\"" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" podUID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" Jan 29 15:05:37 crc kubenswrapper[4620]: E0129 15:05:37.054181 4620 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.76:6443: connect: connection refused" event="&Event{ObjectMeta:{marketplace-operator-79b997595-f8tnf.188f3bfaca54cab9 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:marketplace-operator-79b997595-f8tnf,UID:c44039e8-f318-4ec2-bd3c-587834aa4bb8,APIVersion:v1,ResourceVersion:29988,FieldPath:spec.containers{marketplace-operator},},Reason:Created,Message:Created container marketplace-operator,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 15:05:26.902311609 +0000 UTC m=+267.515139254,LastTimestamp:2026-01-29 15:05:26.902311609 +0000 UTC m=+267.515139254,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 15:05:37 crc kubenswrapper[4620]: I0129 15:05:37.872540 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:05:37 crc kubenswrapper[4620]: I0129 15:05:37.873386 4620 status_manager.go:851] "Failed to get status for pod" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" pod="openshift-marketplace/community-operators-jn68n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jn68n\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:37 crc kubenswrapper[4620]: I0129 15:05:37.873857 4620 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:37 crc kubenswrapper[4620]: I0129 15:05:37.874137 4620 status_manager.go:851] "Failed to get status for pod" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" pod="openshift-marketplace/redhat-operators-tqw2q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqw2q\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:37 crc kubenswrapper[4620]: I0129 15:05:37.874322 4620 status_manager.go:851] "Failed to get status for pod" podUID="7436f3bf-66e4-4314-aa0b-8af645dd5bee" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-zsh8m\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:37 crc kubenswrapper[4620]: I0129 15:05:37.874514 4620 status_manager.go:851] "Failed to get status for pod" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" pod="openshift-marketplace/redhat-marketplace-xwr75" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xwr75\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:37 crc kubenswrapper[4620]: I0129 15:05:37.874689 4620 status_manager.go:851] "Failed to get status for pod" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" pod="openshift-marketplace/redhat-operators-669zb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-669zb\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:37 crc kubenswrapper[4620]: I0129 15:05:37.874888 4620 status_manager.go:851] "Failed to get status for pod" podUID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-f8tnf\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:37 crc kubenswrapper[4620]: I0129 15:05:37.875075 4620 status_manager.go:851] "Failed to get status for pod" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" pod="openshift-marketplace/certified-operators-mzcfs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzcfs\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:37 crc kubenswrapper[4620]: I0129 15:05:37.875270 4620 status_manager.go:851] "Failed to get status for pod" podUID="7a169668-f370-4ff5-bea7-162091cf8c49" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:37 crc kubenswrapper[4620]: I0129 15:05:37.875464 4620 status_manager.go:851] "Failed to get status for pod" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" pod="openshift-marketplace/community-operators-l8bmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-l8bmh\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:37 crc kubenswrapper[4620]: I0129 15:05:37.875651 4620 status_manager.go:851] "Failed to get status for pod" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" pod="openshift-marketplace/redhat-marketplace-mn8xm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn8xm\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:37 crc kubenswrapper[4620]: I0129 15:05:37.892773 4620 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7cb05bef-ef02-4ac8-8d18-4d13ded49beb" Jan 29 15:05:37 crc kubenswrapper[4620]: I0129 15:05:37.893012 4620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7cb05bef-ef02-4ac8-8d18-4d13ded49beb" Jan 29 15:05:37 crc kubenswrapper[4620]: E0129 15:05:37.893788 4620 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:05:37 crc kubenswrapper[4620]: I0129 15:05:37.894422 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:05:38 crc kubenswrapper[4620]: I0129 15:05:38.254605 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b72f815e4ae96e85afd25e99cf22727bd16cd0a4ea9be5284f9164e2d6f1eaff"} Jan 29 15:05:39 crc kubenswrapper[4620]: I0129 15:05:39.260502 4620 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="dfa438eebd07d1aecc05bb290b39fb5f5e9ca31462a7646071262e6d43d1e22c" exitCode=0 Jan 29 15:05:39 crc kubenswrapper[4620]: I0129 15:05:39.260549 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"dfa438eebd07d1aecc05bb290b39fb5f5e9ca31462a7646071262e6d43d1e22c"} Jan 29 15:05:39 crc kubenswrapper[4620]: I0129 15:05:39.260776 4620 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7cb05bef-ef02-4ac8-8d18-4d13ded49beb" Jan 29 15:05:39 crc kubenswrapper[4620]: I0129 15:05:39.260840 4620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7cb05bef-ef02-4ac8-8d18-4d13ded49beb" Jan 29 15:05:39 crc kubenswrapper[4620]: E0129 15:05:39.261275 4620 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:05:39 crc kubenswrapper[4620]: I0129 15:05:39.261337 4620 status_manager.go:851] "Failed to get status for pod" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" pod="openshift-marketplace/community-operators-jn68n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jn68n\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:39 crc kubenswrapper[4620]: I0129 15:05:39.261876 4620 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:39 crc kubenswrapper[4620]: I0129 15:05:39.262130 4620 status_manager.go:851] "Failed to get status for pod" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" pod="openshift-marketplace/redhat-operators-tqw2q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqw2q\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:39 crc kubenswrapper[4620]: I0129 15:05:39.263701 4620 status_manager.go:851] "Failed to get status for pod" podUID="7436f3bf-66e4-4314-aa0b-8af645dd5bee" pod="openshift-marketplace/marketplace-operator-79b997595-zsh8m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-zsh8m\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:39 crc kubenswrapper[4620]: I0129 15:05:39.264038 4620 status_manager.go:851] "Failed to get status for pod" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" 
pod="openshift-marketplace/redhat-marketplace-xwr75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xwr75\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:39 crc kubenswrapper[4620]: I0129 15:05:39.264291 4620 status_manager.go:851] "Failed to get status for pod" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" pod="openshift-marketplace/redhat-operators-669zb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-669zb\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:39 crc kubenswrapper[4620]: I0129 15:05:39.264625 4620 status_manager.go:851] "Failed to get status for pod" podUID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-f8tnf\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:39 crc kubenswrapper[4620]: I0129 15:05:39.264887 4620 status_manager.go:851] "Failed to get status for pod" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" pod="openshift-marketplace/certified-operators-mzcfs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mzcfs\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:39 crc kubenswrapper[4620]: I0129 15:05:39.265734 4620 status_manager.go:851] "Failed to get status for pod" podUID="7a169668-f370-4ff5-bea7-162091cf8c49" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:39 crc kubenswrapper[4620]: I0129 15:05:39.266066 4620 status_manager.go:851] "Failed to get status for pod" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" pod="openshift-marketplace/redhat-marketplace-mn8xm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mn8xm\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:39 crc kubenswrapper[4620]: I0129 15:05:39.266281 4620 status_manager.go:851] "Failed to get status for pod" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" pod="openshift-marketplace/community-operators-l8bmh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-l8bmh\": dial tcp 38.129.56.76:6443: connect: connection refused" Jan 29 15:05:40 crc kubenswrapper[4620]: I0129 15:05:40.274103 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9c09f906806e866991b64cf971f468f4942ba61db5249fe64fcd194a953c3bbc"} Jan 29 15:05:40 crc kubenswrapper[4620]: I0129 15:05:40.274404 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"da7bbfc6f1cc846a80714df1f6e5bbd3a07c390692737051e097cb3f986f9940"} Jan 29 15:05:41 crc kubenswrapper[4620]: I0129 15:05:41.282369 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"17d7d6efeba94615b6e521874ea284d59e5386717b49e008de2d1ac93f0519cf"} Jan 29 15:05:41 crc kubenswrapper[4620]: I0129 15:05:41.282425 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5fbe540fa595e1f41cfedbd209053ca0652c863e7d72b478995674116950b614"} Jan 29 15:05:41 crc kubenswrapper[4620]: I0129 15:05:41.282441 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"882b2b5fbb48b11a07c94213587cf2fbb35ba95283b8315d1ffe8382c959098a"} Jan 29 15:05:41 crc kubenswrapper[4620]: I0129 15:05:41.282551 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:05:41 crc kubenswrapper[4620]: I0129 15:05:41.282633 4620 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7cb05bef-ef02-4ac8-8d18-4d13ded49beb" Jan 29 15:05:41 crc kubenswrapper[4620]: I0129 15:05:41.282651 4620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7cb05bef-ef02-4ac8-8d18-4d13ded49beb" Jan 29 15:05:41 crc kubenswrapper[4620]: I0129 15:05:41.285659 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 15:05:41 crc kubenswrapper[4620]: I0129 15:05:41.285705 4620 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa" exitCode=1 Jan 29 15:05:41 crc kubenswrapper[4620]: I0129 15:05:41.285728 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa"} Jan 29 15:05:41 crc kubenswrapper[4620]: I0129 15:05:41.286165 4620 scope.go:117] "RemoveContainer" containerID="e6c69de20649ee42faedd69d770655c05ec6371449944945da10b987ab07ecfa" Jan 29 15:05:42 crc kubenswrapper[4620]: I0129 15:05:42.294482 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 15:05:42 crc kubenswrapper[4620]: I0129 15:05:42.294859 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e30506068416c5e6c458a1c43deb30603e88fe6d4311a7ab3274a39ca668ee10"} Jan 29 15:05:42 crc kubenswrapper[4620]: I0129 15:05:42.894625 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:05:42 crc kubenswrapper[4620]: I0129 15:05:42.894667 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:05:42 crc kubenswrapper[4620]: I0129 15:05:42.900113 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:05:43 
crc kubenswrapper[4620]: I0129 15:05:43.244800 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:05:43 crc kubenswrapper[4620]: I0129 15:05:43.247359 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 15:05:43 crc kubenswrapper[4620]: I0129 15:05:43.256556 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:05:43 crc kubenswrapper[4620]: I0129 15:05:43.751482 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:05:43 crc kubenswrapper[4620]: I0129 15:05:43.751588 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:05:43 crc kubenswrapper[4620]: I0129 15:05:43.751683 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:05:43 crc kubenswrapper[4620]: I0129 15:05:43.753359 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 15:05:43 crc kubenswrapper[4620]: I0129 15:05:43.754770 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 15:05:43 crc kubenswrapper[4620]: I0129 15:05:43.763364 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 15:05:43 crc kubenswrapper[4620]: I0129 15:05:43.767157 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:05:43 crc kubenswrapper[4620]: I0129 15:05:43.775854 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:05:43 crc kubenswrapper[4620]: I0129 15:05:43.775943 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:05:43 crc kubenswrapper[4620]: I0129 15:05:43.801942 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:05:43 crc kubenswrapper[4620]: I0129 15:05:43.817338 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:05:43 crc kubenswrapper[4620]: I0129 15:05:43.989252 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:05:44 crc kubenswrapper[4620]: W0129 15:05:44.229345 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-bc172a50c1244a0af2d2a377a234a8919961149f0522c50089a4cdae8ea3e084 WatchSource:0}: Error finding container bc172a50c1244a0af2d2a377a234a8919961149f0522c50089a4cdae8ea3e084: Status 404 returned error can't find the container with id bc172a50c1244a0af2d2a377a234a8919961149f0522c50089a4cdae8ea3e084 Jan 29 15:05:44 crc kubenswrapper[4620]: I0129 15:05:44.304830 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"bc172a50c1244a0af2d2a377a234a8919961149f0522c50089a4cdae8ea3e084"} Jan 29 15:05:44 crc kubenswrapper[4620]: W0129 15:05:44.381505 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-1ed9b290f51b19f3cef6adcff211eeb86f3c3326977f5244c6d97c600d3622e4 WatchSource:0}: Error finding container 1ed9b290f51b19f3cef6adcff211eeb86f3c3326977f5244c6d97c600d3622e4: Status 404 returned error can't find the container with id 1ed9b290f51b19f3cef6adcff211eeb86f3c3326977f5244c6d97c600d3622e4 Jan 29 15:05:45 crc kubenswrapper[4620]: I0129 15:05:45.311175 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"16c341fa6bcec1628c3ea0f4d578245b8abcb5c935ff063afc6f4fa9584f90d9"} Jan 29 15:05:45 crc kubenswrapper[4620]: I0129 15:05:45.311472 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1ed9b290f51b19f3cef6adcff211eeb86f3c3326977f5244c6d97c600d3622e4"} Jan 29 15:05:45 crc kubenswrapper[4620]: I0129 15:05:45.311655 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:05:45 crc kubenswrapper[4620]: I0129 15:05:45.313660 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0742158ed53017ed3785f88c4ba1991d01438a2bbd3bf699bd8ee7153575fc88"} Jan 29 15:05:45 crc kubenswrapper[4620]: I0129 15:05:45.313710 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a47c4b9cb80071e2856851c253e254106557c7a1f45d51d5966d89ad4a9de355"} Jan 29 15:05:45 crc kubenswrapper[4620]: I0129 15:05:45.315007 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f4f564f5e8bacca5426439efb09372840372f56515e5a8dc4bf2f715f8f38b3b"} Jan 29 15:05:45 crc kubenswrapper[4620]: I0129 15:05:45.730942 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:05:45 crc kubenswrapper[4620]: I0129 15:05:45.734555 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:05:46 crc kubenswrapper[4620]: I0129 15:05:46.321117 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Jan 29 15:05:46 crc kubenswrapper[4620]: I0129 15:05:46.321442 4620 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="0742158ed53017ed3785f88c4ba1991d01438a2bbd3bf699bd8ee7153575fc88" exitCode=255 Jan 29 15:05:46 crc kubenswrapper[4620]: I0129 15:05:46.321538 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"0742158ed53017ed3785f88c4ba1991d01438a2bbd3bf699bd8ee7153575fc88"} Jan 29 15:05:46 crc kubenswrapper[4620]: I0129 15:05:46.321955 4620 scope.go:117] "RemoveContainer" containerID="0742158ed53017ed3785f88c4ba1991d01438a2bbd3bf699bd8ee7153575fc88" Jan 29 15:05:46 crc kubenswrapper[4620]: I0129 15:05:46.322204 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:05:46 crc kubenswrapper[4620]: I0129 15:05:46.352527 4620 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:05:46 crc kubenswrapper[4620]: I0129 15:05:46.398630 4620 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6ae7ba65-68ce-48db-944f-855a2470fb14" Jan 29 15:05:46 crc kubenswrapper[4620]: I0129 15:05:46.872872 4620 scope.go:117] "RemoveContainer" containerID="fab0c6ae1ad79ab540bbf36f0baf0509a2cfa1933370a5b2ef8cc6b30cd063d4" Jan 29 15:05:47 crc kubenswrapper[4620]: I0129 15:05:47.327299 4620 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 29 15:05:47 crc kubenswrapper[4620]: I0129 15:05:47.328027 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Jan 29 15:05:47 crc kubenswrapper[4620]: I0129 15:05:47.328067 4620 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="8485cc1bf8275af1aaeacfb40a57ec6b597002d7d0fa565c19dbf0083b7eae7e" exitCode=255 Jan 29 15:05:47 crc kubenswrapper[4620]: I0129 15:05:47.328117 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"8485cc1bf8275af1aaeacfb40a57ec6b597002d7d0fa565c19dbf0083b7eae7e"} Jan 29 15:05:47 crc kubenswrapper[4620]: I0129 15:05:47.328147 4620 scope.go:117] "RemoveContainer" containerID="0742158ed53017ed3785f88c4ba1991d01438a2bbd3bf699bd8ee7153575fc88" Jan 29 15:05:47 crc kubenswrapper[4620]: I0129 15:05:47.328463 4620 scope.go:117] "RemoveContainer" containerID="8485cc1bf8275af1aaeacfb40a57ec6b597002d7d0fa565c19dbf0083b7eae7e" Jan 29 15:05:47 crc kubenswrapper[4620]: E0129 15:05:47.328699 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:05:47 crc kubenswrapper[4620]: I0129 15:05:47.330240 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-f8tnf_c44039e8-f318-4ec2-bd3c-587834aa4bb8/marketplace-operator/2.log" Jan 29 15:05:47 crc kubenswrapper[4620]: I0129 15:05:47.330880 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-f8tnf_c44039e8-f318-4ec2-bd3c-587834aa4bb8/marketplace-operator/1.log" Jan 29 15:05:47 crc kubenswrapper[4620]: I0129 15:05:47.330949 4620 generic.go:334] "Generic (PLEG): container finished" podID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" containerID="6699b6a193e6ba9c3b7779a4f0a151100c78139525a75ac23517011a7e61efeb" exitCode=1 Jan 29 15:05:47 crc kubenswrapper[4620]: I0129 15:05:47.332045 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" event={"ID":"c44039e8-f318-4ec2-bd3c-587834aa4bb8","Type":"ContainerDied","Data":"6699b6a193e6ba9c3b7779a4f0a151100c78139525a75ac23517011a7e61efeb"} Jan 29 15:05:47 crc kubenswrapper[4620]: I0129 15:05:47.332452 4620 scope.go:117] "RemoveContainer" containerID="6699b6a193e6ba9c3b7779a4f0a151100c78139525a75ac23517011a7e61efeb" Jan 29 15:05:47 crc kubenswrapper[4620]: E0129 15:05:47.332703 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-f8tnf_openshift-marketplace(c44039e8-f318-4ec2-bd3c-587834aa4bb8)\"" 
pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" podUID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" Jan 29 15:05:47 crc kubenswrapper[4620]: I0129 15:05:47.333405 4620 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7cb05bef-ef02-4ac8-8d18-4d13ded49beb" Jan 29 15:05:47 crc kubenswrapper[4620]: I0129 15:05:47.333425 4620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7cb05bef-ef02-4ac8-8d18-4d13ded49beb" Jan 29 15:05:47 crc kubenswrapper[4620]: I0129 15:05:47.340928 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:05:47 crc kubenswrapper[4620]: I0129 15:05:47.349285 4620 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6ae7ba65-68ce-48db-944f-855a2470fb14" Jan 29 15:05:47 crc kubenswrapper[4620]: I0129 15:05:47.398230 4620 scope.go:117] "RemoveContainer" containerID="fab0c6ae1ad79ab540bbf36f0baf0509a2cfa1933370a5b2ef8cc6b30cd063d4" Jan 29 15:05:48 crc kubenswrapper[4620]: I0129 15:05:48.338238 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 29 15:05:48 crc kubenswrapper[4620]: I0129 15:05:48.340683 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-f8tnf_c44039e8-f318-4ec2-bd3c-587834aa4bb8/marketplace-operator/2.log" Jan 29 15:05:48 crc kubenswrapper[4620]: I0129 15:05:48.341007 4620 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7cb05bef-ef02-4ac8-8d18-4d13ded49beb" Jan 29 15:05:48 crc kubenswrapper[4620]: I0129 15:05:48.341032 4620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7cb05bef-ef02-4ac8-8d18-4d13ded49beb" Jan 29 15:05:48 crc kubenswrapper[4620]: I0129 15:05:48.343830 4620 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6ae7ba65-68ce-48db-944f-855a2470fb14" Jan 29 15:05:55 crc kubenswrapper[4620]: I0129 15:05:55.785235 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" Jan 29 15:05:55 crc kubenswrapper[4620]: I0129 15:05:55.785980 4620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" Jan 29 15:05:55 crc kubenswrapper[4620]: I0129 15:05:55.786878 4620 scope.go:117] "RemoveContainer" containerID="6699b6a193e6ba9c3b7779a4f0a151100c78139525a75ac23517011a7e61efeb" Jan 29 15:05:55 crc kubenswrapper[4620]: E0129 15:05:55.787147 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-f8tnf_openshift-marketplace(c44039e8-f318-4ec2-bd3c-587834aa4bb8)\"" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" podUID="c44039e8-f318-4ec2-bd3c-587834aa4bb8" Jan 29 15:05:56 crc kubenswrapper[4620]: I0129 15:05:56.354673 4620 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 15:05:56 crc kubenswrapper[4620]: I0129 15:05:56.579252 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 15:05:56 crc kubenswrapper[4620]: I0129 15:05:56.764278 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 15:05:57 crc kubenswrapper[4620]: I0129 15:05:57.308302 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 15:05:57 crc kubenswrapper[4620]: I0129 15:05:57.784987 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 15:05:58 crc kubenswrapper[4620]: I0129 15:05:58.148777 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 15:05:58 crc kubenswrapper[4620]: I0129 15:05:58.607993 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:05:58 crc kubenswrapper[4620]: I0129 15:05:58.743173 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 15:05:58 crc kubenswrapper[4620]: I0129 15:05:58.774590 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 15:05:58 crc kubenswrapper[4620]: I0129 15:05:58.781782 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 15:05:59 crc kubenswrapper[4620]: I0129 15:05:59.071119 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 15:05:59 crc kubenswrapper[4620]: I0129 15:05:59.264266 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 15:05:59 crc kubenswrapper[4620]: I0129 15:05:59.284492 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 15:05:59 crc kubenswrapper[4620]: I0129 15:05:59.333412 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 15:05:59 crc kubenswrapper[4620]: I0129 15:05:59.551917 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 15:05:59 crc kubenswrapper[4620]: I0129 15:05:59.696241 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 15:05:59 crc kubenswrapper[4620]: I0129 15:05:59.801741 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 15:05:59 crc kubenswrapper[4620]: I0129 15:05:59.805337 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 15:05:59 crc kubenswrapper[4620]: I0129 15:05:59.889695 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 15:06:00 crc kubenswrapper[4620]: 
I0129 15:06:00.236383 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 15:06:00 crc kubenswrapper[4620]: I0129 15:06:00.262040 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 15:06:00 crc kubenswrapper[4620]: I0129 15:06:00.403559 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 15:06:00 crc kubenswrapper[4620]: I0129 15:06:00.439972 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 15:06:00 crc kubenswrapper[4620]: I0129 15:06:00.457518 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 15:06:00 crc kubenswrapper[4620]: I0129 15:06:00.513433 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 15:06:00 crc kubenswrapper[4620]: I0129 15:06:00.534381 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 15:06:00 crc kubenswrapper[4620]: I0129 15:06:00.543189 4620 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 29 15:06:00 crc kubenswrapper[4620]: I0129 15:06:00.605393 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 15:06:00 crc kubenswrapper[4620]: I0129 15:06:00.607180 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 15:06:00 crc kubenswrapper[4620]: I0129 15:06:00.671267 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 15:06:00 crc kubenswrapper[4620]: I0129 15:06:00.881302 4620 scope.go:117] "RemoveContainer" containerID="8485cc1bf8275af1aaeacfb40a57ec6b597002d7d0fa565c19dbf0083b7eae7e" Jan 29 15:06:01 crc kubenswrapper[4620]: I0129 15:06:01.018944 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 15:06:01 crc kubenswrapper[4620]: I0129 15:06:01.021831 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 15:06:01 crc kubenswrapper[4620]: I0129 15:06:01.190351 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 15:06:01 crc kubenswrapper[4620]: I0129 15:06:01.408905 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 29 15:06:01 crc kubenswrapper[4620]: I0129 15:06:01.408984 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"99efac41da5be52a6a1960a76a5f9b787fa7d8ea028e2c14e38289088328875e"} Jan 29 15:06:01 crc kubenswrapper[4620]: I0129 15:06:01.446907 4620 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 15:06:01 crc kubenswrapper[4620]: I0129 15:06:01.465424 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 15:06:01 crc kubenswrapper[4620]: I0129 15:06:01.533924 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 15:06:01 crc kubenswrapper[4620]: I0129 15:06:01.701201 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 15:06:01 crc kubenswrapper[4620]: I0129 15:06:01.871078 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 15:06:01 crc kubenswrapper[4620]: I0129 15:06:01.907069 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.005281 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.023252 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.039520 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.070862 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.097436 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.097812 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.393146 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.415240 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log" Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.415957 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.416016 4620 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="99efac41da5be52a6a1960a76a5f9b787fa7d8ea028e2c14e38289088328875e" exitCode=255 Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.416054 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"99efac41da5be52a6a1960a76a5f9b787fa7d8ea028e2c14e38289088328875e"} Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.416099 4620 scope.go:117] "RemoveContainer" 
containerID="8485cc1bf8275af1aaeacfb40a57ec6b597002d7d0fa565c19dbf0083b7eae7e" Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.416470 4620 scope.go:117] "RemoveContainer" containerID="99efac41da5be52a6a1960a76a5f9b787fa7d8ea028e2c14e38289088328875e" Jan 29 15:06:02 crc kubenswrapper[4620]: E0129 15:06:02.416678 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.423932 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.445112 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.512297 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.534390 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.546951 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.590269 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.648909 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.672142 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.688519 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 15:06:02 crc kubenswrapper[4620]: I0129 15:06:02.750993 4620 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 15:06:03 crc kubenswrapper[4620]: I0129 15:06:03.024530 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 15:06:03 crc kubenswrapper[4620]: I0129 15:06:03.048323 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 15:06:03 crc kubenswrapper[4620]: I0129 15:06:03.050437 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 15:06:03 crc kubenswrapper[4620]: I0129 15:06:03.106604 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:06:03 crc kubenswrapper[4620]: I0129 15:06:03.121895 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" 
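The E0129 pod_workers.go:1301 entry above is the kubelet's crash-loop back-off in action: check-endpoints exited with code 255, the kubelet dropped the dead container ("RemoveContainer"), and the next restart is refused with "back-off 20s". Kubernetes documents the CrashLoopBackOff delay as starting at 10s and doubling on each consecutive failure, capped at five minutes and reset after ten minutes of stable running, so a 20s back-off marks the second failure in a row. A minimal sketch of that schedule, assuming the documented defaults (illustrative only, not kubelet's actual code):

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns the restart delay after n consecutive failures,
// following the documented kubelet defaults: 10s initial back-off, doubled
// per failure, capped at 5 minutes. Illustrative sketch, not kubelet code.
func crashLoopDelay(n int) time.Duration {
	d := 10 * time.Second
	for i := 1; i < n; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Printf("failure %d -> back-off %s\n", n, crashLoopDelay(n))
	}
	// failure 2 -> back-off 20s, matching the pod_workers entry above.
}

The retries at 15:06:15 and 15:06:28 further down show the same pod being refused with the identical 20s back-off and then restarting once the window has elapsed.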
Jan 29 15:06:03 crc kubenswrapper[4620]: I0129 15:06:03.158727 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 15:06:03 crc kubenswrapper[4620]: I0129 15:06:03.194422 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 15:06:03 crc kubenswrapper[4620]: I0129 15:06:03.274566 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 15:06:03 crc kubenswrapper[4620]: I0129 15:06:03.323587 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 15:06:03 crc kubenswrapper[4620]: I0129 15:06:03.422634 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log" Jan 29 15:06:03 crc kubenswrapper[4620]: I0129 15:06:03.516177 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 15:06:03 crc kubenswrapper[4620]: I0129 15:06:03.516489 4620 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 15:06:03 crc kubenswrapper[4620]: I0129 15:06:03.583462 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 15:06:03 crc kubenswrapper[4620]: I0129 15:06:03.591042 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 15:06:03 crc kubenswrapper[4620]: I0129 15:06:03.639888 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 15:06:03 crc kubenswrapper[4620]: I0129 15:06:03.687848 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 15:06:03 crc kubenswrapper[4620]: I0129 15:06:03.757131 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:06:03 crc kubenswrapper[4620]: I0129 15:06:03.777629 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 15:06:03 crc kubenswrapper[4620]: I0129 15:06:03.813827 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:06:03 crc kubenswrapper[4620]: I0129 15:06:03.929807 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 15:06:03 crc kubenswrapper[4620]: I0129 15:06:03.957943 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 15:06:04 crc kubenswrapper[4620]: I0129 15:06:04.030994 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 15:06:04 crc kubenswrapper[4620]: I0129 15:06:04.072162 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 15:06:04 crc kubenswrapper[4620]: I0129 15:06:04.119744 4620 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 15:06:04 crc kubenswrapper[4620]: I0129 15:06:04.193346 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 15:06:04 crc kubenswrapper[4620]: I0129 15:06:04.277933 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 15:06:04 crc kubenswrapper[4620]: I0129 15:06:04.328416 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 15:06:04 crc kubenswrapper[4620]: I0129 15:06:04.475313 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 15:06:04 crc kubenswrapper[4620]: I0129 15:06:04.589365 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 15:06:04 crc kubenswrapper[4620]: I0129 15:06:04.604714 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 15:06:04 crc kubenswrapper[4620]: I0129 15:06:04.780820 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 15:06:04 crc kubenswrapper[4620]: I0129 15:06:04.824023 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 15:06:04 crc kubenswrapper[4620]: I0129 15:06:04.871670 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 15:06:04 crc kubenswrapper[4620]: I0129 15:06:04.904441 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 15:06:05 crc kubenswrapper[4620]: I0129 15:06:05.040089 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 15:06:05 crc kubenswrapper[4620]: I0129 15:06:05.050080 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 15:06:05 crc kubenswrapper[4620]: I0129 15:06:05.074698 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 15:06:05 crc kubenswrapper[4620]: I0129 15:06:05.088160 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 15:06:05 crc kubenswrapper[4620]: I0129 15:06:05.094899 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 15:06:05 crc kubenswrapper[4620]: I0129 15:06:05.151874 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 15:06:05 crc kubenswrapper[4620]: I0129 15:06:05.182181 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 15:06:05 crc kubenswrapper[4620]: I0129 15:06:05.236657 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 15:06:05 crc kubenswrapper[4620]: I0129 15:06:05.251439 4620 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"dns-default" Jan 29 15:06:05 crc kubenswrapper[4620]: I0129 15:06:05.548121 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 15:06:05 crc kubenswrapper[4620]: I0129 15:06:05.598247 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 15:06:05 crc kubenswrapper[4620]: I0129 15:06:05.631806 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 15:06:05 crc kubenswrapper[4620]: I0129 15:06:05.640230 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 15:06:05 crc kubenswrapper[4620]: I0129 15:06:05.652582 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 15:06:05 crc kubenswrapper[4620]: I0129 15:06:05.680011 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 15:06:05 crc kubenswrapper[4620]: I0129 15:06:05.684025 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 15:06:05 crc kubenswrapper[4620]: I0129 15:06:05.755142 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 15:06:05 crc kubenswrapper[4620]: I0129 15:06:05.796107 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 15:06:05 crc kubenswrapper[4620]: I0129 15:06:05.800154 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 15:06:05 crc kubenswrapper[4620]: I0129 15:06:05.857690 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 15:06:05 crc kubenswrapper[4620]: I0129 15:06:05.903476 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.035083 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.088929 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.230247 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.248924 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.345067 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.358299 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 
15:06:06.372068 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.401496 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.416667 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.433783 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.440656 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.550531 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.634745 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.665309 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.701903 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.703729 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.756748 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.777889 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.781809 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.823517 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.912075 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.946154 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.976591 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 15:06:06 crc kubenswrapper[4620]: I0129 15:06:06.988410 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 15:06:07 crc kubenswrapper[4620]: I0129 15:06:07.000332 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 
15:06:07 crc kubenswrapper[4620]: I0129 15:06:07.029278 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 15:06:07 crc kubenswrapper[4620]: I0129 15:06:07.084040 4620 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 15:06:07 crc kubenswrapper[4620]: I0129 15:06:07.133722 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 15:06:07 crc kubenswrapper[4620]: I0129 15:06:07.247903 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 15:06:07 crc kubenswrapper[4620]: I0129 15:06:07.250727 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 15:06:07 crc kubenswrapper[4620]: I0129 15:06:07.265602 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 15:06:07 crc kubenswrapper[4620]: I0129 15:06:07.319261 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 15:06:07 crc kubenswrapper[4620]: I0129 15:06:07.336537 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 15:06:07 crc kubenswrapper[4620]: I0129 15:06:07.367927 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 15:06:07 crc kubenswrapper[4620]: I0129 15:06:07.435487 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 15:06:07 crc kubenswrapper[4620]: I0129 15:06:07.465988 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 15:06:07 crc kubenswrapper[4620]: I0129 15:06:07.466390 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 15:06:07 crc kubenswrapper[4620]: I0129 15:06:07.548123 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 15:06:07 crc kubenswrapper[4620]: I0129 15:06:07.872018 4620 scope.go:117] "RemoveContainer" containerID="6699b6a193e6ba9c3b7779a4f0a151100c78139525a75ac23517011a7e61efeb" Jan 29 15:06:07 crc kubenswrapper[4620]: I0129 15:06:07.921036 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 15:06:07 crc kubenswrapper[4620]: I0129 15:06:07.932970 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 15:06:07 crc kubenswrapper[4620]: I0129 15:06:07.945719 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 15:06:07 crc kubenswrapper[4620]: I0129 15:06:07.974190 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 15:06:07 crc kubenswrapper[4620]: I0129 15:06:07.990746 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 15:06:08 crc kubenswrapper[4620]: 
I0129 15:06:08.012642 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 15:06:08 crc kubenswrapper[4620]: I0129 15:06:08.020571 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 15:06:08 crc kubenswrapper[4620]: I0129 15:06:08.221588 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 15:06:08 crc kubenswrapper[4620]: I0129 15:06:08.306705 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 15:06:08 crc kubenswrapper[4620]: I0129 15:06:08.433505 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 15:06:08 crc kubenswrapper[4620]: I0129 15:06:08.448898 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 15:06:08 crc kubenswrapper[4620]: I0129 15:06:08.450557 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-f8tnf_c44039e8-f318-4ec2-bd3c-587834aa4bb8/marketplace-operator/2.log" Jan 29 15:06:08 crc kubenswrapper[4620]: I0129 15:06:08.450614 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" event={"ID":"c44039e8-f318-4ec2-bd3c-587834aa4bb8","Type":"ContainerStarted","Data":"2708bec441b04f8dc5e7ac4b50ef3063a6dbd33acb5cfd56326ad99f8e8b0493"} Jan 29 15:06:08 crc kubenswrapper[4620]: I0129 15:06:08.450924 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" Jan 29 15:06:08 crc kubenswrapper[4620]: I0129 15:06:08.453125 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" Jan 29 15:06:08 crc kubenswrapper[4620]: I0129 15:06:08.523889 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 15:06:08 crc kubenswrapper[4620]: I0129 15:06:08.642164 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 15:06:08 crc kubenswrapper[4620]: I0129 15:06:08.747791 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 15:06:08 crc kubenswrapper[4620]: I0129 15:06:08.755365 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 15:06:08 crc kubenswrapper[4620]: I0129 15:06:08.838537 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 15:06:08 crc kubenswrapper[4620]: I0129 15:06:08.851996 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 15:06:08 crc kubenswrapper[4620]: I0129 15:06:08.971533 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 15:06:08 crc kubenswrapper[4620]: I0129 15:06:08.973440 4620 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"multus-daemon-config" Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.075047 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.151089 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.260602 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.314398 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.402588 4620 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.412106 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.488283 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.499501 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.512165 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.521488 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.607911 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.661871 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.664773 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.732289 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.745105 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.842459 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.881402 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.894485 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.895733 4620 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.926664 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.929266 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.958169 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 15:06:09 crc kubenswrapper[4620]: I0129 15:06:09.974512 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 15:06:10 crc kubenswrapper[4620]: I0129 15:06:10.003225 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 15:06:10 crc kubenswrapper[4620]: I0129 15:06:10.007662 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 15:06:10 crc kubenswrapper[4620]: I0129 15:06:10.191346 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 15:06:10 crc kubenswrapper[4620]: I0129 15:06:10.266709 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 15:06:10 crc kubenswrapper[4620]: I0129 15:06:10.273829 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 15:06:10 crc kubenswrapper[4620]: I0129 15:06:10.298724 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 15:06:10 crc kubenswrapper[4620]: I0129 15:06:10.315836 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 15:06:10 crc kubenswrapper[4620]: I0129 15:06:10.316536 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 15:06:10 crc kubenswrapper[4620]: I0129 15:06:10.346793 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 15:06:10 crc kubenswrapper[4620]: I0129 15:06:10.392581 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 15:06:10 crc kubenswrapper[4620]: I0129 15:06:10.495271 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 15:06:10 crc kubenswrapper[4620]: I0129 15:06:10.519736 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 15:06:10 crc kubenswrapper[4620]: I0129 15:06:10.622011 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 15:06:10 crc kubenswrapper[4620]: I0129 15:06:10.723843 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 15:06:10 crc kubenswrapper[4620]: I0129 15:06:10.739638 4620 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 15:06:10 crc kubenswrapper[4620]: I0129 15:06:10.837106 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 15:06:10 crc kubenswrapper[4620]: I0129 15:06:10.923658 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 15:06:10 crc kubenswrapper[4620]: I0129 15:06:10.948263 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 15:06:11 crc kubenswrapper[4620]: I0129 15:06:11.100358 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 15:06:11 crc kubenswrapper[4620]: I0129 15:06:11.114950 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 15:06:11 crc kubenswrapper[4620]: I0129 15:06:11.252302 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 15:06:11 crc kubenswrapper[4620]: I0129 15:06:11.259292 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 15:06:11 crc kubenswrapper[4620]: I0129 15:06:11.445509 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 15:06:11 crc kubenswrapper[4620]: I0129 15:06:11.464500 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 15:06:11 crc kubenswrapper[4620]: I0129 15:06:11.531418 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 15:06:11 crc kubenswrapper[4620]: I0129 15:06:11.531947 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 15:06:11 crc kubenswrapper[4620]: I0129 15:06:11.599360 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 15:06:11 crc kubenswrapper[4620]: I0129 15:06:11.683958 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 15:06:11 crc kubenswrapper[4620]: I0129 15:06:11.696921 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 15:06:11 crc kubenswrapper[4620]: I0129 15:06:11.796279 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 15:06:11 crc kubenswrapper[4620]: I0129 15:06:11.809420 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 15:06:11 crc kubenswrapper[4620]: I0129 15:06:11.968378 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.075953 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.087296 4620 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.214104 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.376168 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.408279 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.457588 4620 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.460802 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=46.460751496 podStartE2EDuration="46.460751496s" podCreationTimestamp="2026-01-29 15:05:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:05:46.572608503 +0000 UTC m=+287.185436148" watchObservedRunningTime="2026-01-29 15:06:12.460751496 +0000 UTC m=+313.073579151"
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.461566 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-f8tnf" podStartSLOduration=47.461556553 podStartE2EDuration="47.461556553s" podCreationTimestamp="2026-01-29 15:05:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:06:08.465257705 +0000 UTC m=+309.078085350" watchObservedRunningTime="2026-01-29 15:06:12.461556553 +0000 UTC m=+313.074384218"
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.462824 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-669zb","openshift-marketplace/certified-operators-mzcfs","openshift-marketplace/marketplace-operator-79b997595-zsh8m","openshift-marketplace/redhat-marketplace-xwr75","openshift-marketplace/community-operators-l8bmh","openshift-marketplace/community-operators-jn68n","openshift-marketplace/redhat-operators-tqw2q","openshift-marketplace/redhat-marketplace-mn8xm","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.463296 4620 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7cb05bef-ef02-4ac8-8d18-4d13ded49beb"
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.463321 4620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7cb05bef-ef02-4ac8-8d18-4d13ded49beb"
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.462930 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.469643 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.544466 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
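The two pod_startup_latency_tracker.go:104 entries above report the kubelet's pod-startup SLI: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp with image-pull time excluded, and since both pull timestamps here are the zero value "0001-01-01 00:00:00 +0000 UTC", podStartE2EDuration comes out identical; the m=+313.07… suffixes are Go monotonic-clock readings relative to kubelet start. The arithmetic can be replayed directly from the logged timestamps (a quick consistency check, not kubelet code):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied verbatim from the kube-apiserver-startup-monitor-crc
	// entry above; the layout is Go's default time.Time formatting.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-29 15:05:26 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-01-29 15:06:12.460751496 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(observed.Sub(created)) // 46.460751496s, the logged podStartSLOduration
}

The same check holds for the marketplace-operator entry (15:06:12.461556553 minus 15:05:25 gives the logged 47.461556553s) and for the kube-apiserver-crc entry further down (15:06:15.885815172 minus 15:05:46 gives 29.885815172s).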
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.559239 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.879383 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2" path="/var/lib/kubelet/pods/0bc38d4d-ec23-4e67-a3b7-48b975d6c2a2/volumes"
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.880015 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff" path="/var/lib/kubelet/pods/2c0b8ec8-eb17-4974-9f60-bb33ec9df3ff/volumes"
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.880535 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60d03ebd-82d4-4ffe-895e-2909c15480d7" path="/var/lib/kubelet/pods/60d03ebd-82d4-4ffe-895e-2909c15480d7/volumes"
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.881073 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67739a3a-d009-4685-a79a-aaa81f5b2daf" path="/var/lib/kubelet/pods/67739a3a-d009-4685-a79a-aaa81f5b2daf/volumes"
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.881647 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7436f3bf-66e4-4314-aa0b-8af645dd5bee" path="/var/lib/kubelet/pods/7436f3bf-66e4-4314-aa0b-8af645dd5bee/volumes"
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.882215 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ced9400-3ad4-4717-ab18-327d5e40daa4" path="/var/lib/kubelet/pods/8ced9400-3ad4-4717-ab18-327d5e40daa4/volumes"
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.882812 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7f8b2b8-8396-425a-94ac-e66deddac937" path="/var/lib/kubelet/pods/a7f8b2b8-8396-425a-94ac-e66deddac937/volumes"
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.883406 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d401dd8c-5cfc-4cbd-92de-bb9896e90ea0" path="/var/lib/kubelet/pods/d401dd8c-5cfc-4cbd-92de-bb9896e90ea0/volumes"
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.938025 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.978578 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 15:06:12 crc kubenswrapper[4620]: I0129 15:06:12.980667 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 29 15:06:13 crc kubenswrapper[4620]: I0129 15:06:13.007044 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 29 15:06:13 crc kubenswrapper[4620]: I0129 15:06:13.188582 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 29 15:06:13 crc kubenswrapper[4620]: I0129 15:06:13.317607 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 29 15:06:13 crc kubenswrapper[4620]: I0129 15:06:13.755111 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 29 15:06:13 crc kubenswrapper[4620]: I0129 15:06:13.837598 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 29 15:06:13 crc kubenswrapper[4620]: I0129 15:06:13.862862 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 29 15:06:13 crc kubenswrapper[4620]: I0129 15:06:13.901830 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 29 15:06:13 crc kubenswrapper[4620]: I0129 15:06:13.967115 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 29 15:06:14 crc kubenswrapper[4620]: I0129 15:06:14.026522 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 29 15:06:14 crc kubenswrapper[4620]: I0129 15:06:14.774189 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 29 15:06:14 crc kubenswrapper[4620]: I0129 15:06:14.782101 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 29 15:06:14 crc kubenswrapper[4620]: I0129 15:06:14.892722 4620 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 29 15:06:15 crc kubenswrapper[4620]: I0129 15:06:15.093509 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 29 15:06:15 crc kubenswrapper[4620]: I0129 15:06:15.872240 4620 scope.go:117] "RemoveContainer" containerID="99efac41da5be52a6a1960a76a5f9b787fa7d8ea028e2c14e38289088328875e"
Jan 29 15:06:15 crc kubenswrapper[4620]: E0129 15:06:15.872433 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:06:15 crc kubenswrapper[4620]: I0129 15:06:15.885834 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=29.885815172 podStartE2EDuration="29.885815172s" podCreationTimestamp="2026-01-29 15:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:06:12.48724587 +0000 UTC m=+313.100073525" watchObservedRunningTime="2026-01-29 15:06:15.885815172 +0000 UTC m=+316.498642817"
Jan 29 15:06:20 crc kubenswrapper[4620]: I0129 15:06:20.376797 4620 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 29 15:06:20 crc kubenswrapper[4620]: I0129 15:06:20.378405 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://6819a3f616c654bbeaae8b2467be9e13f8ffb404bf563db4646b746c1eb3588d" gracePeriod=5
Jan 29 15:06:23 crc kubenswrapper[4620]: I0129 15:06:23.993610 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
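The sequence above ends the startup-monitor static pod: its manifest disappeared (SyncLoop REMOVE with source="file"), so at 15:06:20 the kubelet asks CRI-O to stop the container with gracePeriod=5. The PLEG entry just below records exitCode=137, i.e. 128+9: the process did not exit within the grace period and was SIGKILLed. A small decoder for that convention (a sketch of the common 128+signal rule; exact semantics depend on the runtime):

package main

import (
	"fmt"
	"syscall"
)

// describeExitCode decodes a container exit code using the common
// 128+signal convention. Sketch only; runtimes may differ in corner cases.
func describeExitCode(code int) string {
	if sig := code - 128; sig > 0 && sig <= 64 {
		return fmt.Sprintf("terminated by signal %d (%v)", sig, syscall.Signal(sig))
	}
	return fmt.Sprintf("exit status %d", code)
}

func main() {
	fmt.Println(describeExitCode(137)) // terminated by signal 9 (killed)
	fmt.Println(describeExitCode(255)) // exit status 255
}

By contrast, the 255 from check-endpoints earlier is an exit status set by the process itself rather than a signal death.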
Jan 29 15:06:25 crc kubenswrapper[4620]: I0129 15:06:25.529869 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 29 15:06:25 crc kubenswrapper[4620]: I0129 15:06:25.530327 4620 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="6819a3f616c654bbeaae8b2467be9e13f8ffb404bf563db4646b746c1eb3588d" exitCode=137
Jan 29 15:06:25 crc kubenswrapper[4620]: I0129 15:06:25.995444 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 29 15:06:25 crc kubenswrapper[4620]: I0129 15:06:25.995529 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.107506 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.107627 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.107688 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.107715 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.107716 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.107741 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.107797 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.107897 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.107882 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.108150 4620 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.108163 4620 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.108172 4620 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.108181 4620 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.117369 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.209151 4620 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.536899 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.536987 4620 scope.go:117] "RemoveContainer" containerID="6819a3f616c654bbeaae8b2467be9e13f8ffb404bf563db4646b746c1eb3588d" Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.537350 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.879267 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.879506 4620 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.892560 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.892611 4620 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c37281f3-1a41-4eb3-8a52-5b7548907a17"
Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.896109 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 29 15:06:26 crc kubenswrapper[4620]: I0129 15:06:26.896143 4620 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c37281f3-1a41-4eb3-8a52-5b7548907a17"
Jan 29 15:06:28 crc kubenswrapper[4620]: I0129 15:06:28.872862 4620 scope.go:117] "RemoveContainer" containerID="99efac41da5be52a6a1960a76a5f9b787fa7d8ea028e2c14e38289088328875e"
Jan 29 15:06:29 crc kubenswrapper[4620]: I0129 15:06:29.557896 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log"
Jan 29 15:06:29 crc kubenswrapper[4620]: I0129 15:06:29.557947 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"37df9f95a9dc47c30a6f856ed6024b17b1651afbd0935a97e8868fa1704563c6"}
Jan 29 15:06:37 crc kubenswrapper[4620]: I0129 15:06:37.767382 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j8dxg"]
Jan 29 15:06:37 crc kubenswrapper[4620]: E0129 15:06:37.768126 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 29 15:06:37 crc kubenswrapper[4620]: I0129 15:06:37.768139 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 29 15:06:37 crc kubenswrapper[4620]: E0129 15:06:37.768163 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a169668-f370-4ff5-bea7-162091cf8c49" containerName="installer"
Jan 29 15:06:37 crc kubenswrapper[4620]: I0129 15:06:37.768170 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a169668-f370-4ff5-bea7-162091cf8c49" containerName="installer"
Jan 29 15:06:37 crc kubenswrapper[4620]: I0129 15:06:37.768254 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a169668-f370-4ff5-bea7-162091cf8c49" containerName="installer"
Jan 29 15:06:37 crc kubenswrapper[4620]: I0129 15:06:37.768265 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
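The cpu_manager.go:410, state_mem.go:107 and memory_manager.go:354 entries above show the kubelet's resource managers pruning checkpointed per-container state for pods that no longer exist (the removed startup-monitor static pod and an earlier installer pod), triggered as the new redhat-operators-j8dxg pod is admitted. Conceptually it is a sweep of a state map against the set of active pod UIDs; a simplified model of the idea (assumed shapes, not kubelet's actual data structures):

package main

import "fmt"

// removeStaleState drops per-container assignments whose pod UID is no
// longer active — a simplified model of the "RemoveStaleState" entries above.
func removeStaleState(assignments map[string][]string, activePods map[string]bool) {
	for podUID, containers := range assignments {
		if activePods[podUID] {
			continue
		}
		for _, name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
		}
		delete(assignments, podUID) // deleting during range is safe in Go
	}
}

func main() {
	assignments := map[string][]string{
		"f85e55b1a89d02b0cb034b1ea31ed45a":     {"startup-monitor"},
		"7a169668-f370-4ff5-bea7-162091cf8c49": {"installer"},
	}
	removeStaleState(assignments, map[string]bool{"2d0c7733-6f6a-4468-9065-7ca4df3cdc68": true})
}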
Jan 29 15:06:37 crc kubenswrapper[4620]: I0129 15:06:37.768906 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j8dxg"
Jan 29 15:06:37 crc kubenswrapper[4620]: I0129 15:06:37.771179 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 29 15:06:37 crc kubenswrapper[4620]: I0129 15:06:37.785347 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j8dxg"]
Jan 29 15:06:37 crc kubenswrapper[4620]: I0129 15:06:37.858659 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d0c7733-6f6a-4468-9065-7ca4df3cdc68-catalog-content\") pod \"redhat-operators-j8dxg\" (UID: \"2d0c7733-6f6a-4468-9065-7ca4df3cdc68\") " pod="openshift-marketplace/redhat-operators-j8dxg"
Jan 29 15:06:37 crc kubenswrapper[4620]: I0129 15:06:37.858729 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d0c7733-6f6a-4468-9065-7ca4df3cdc68-utilities\") pod \"redhat-operators-j8dxg\" (UID: \"2d0c7733-6f6a-4468-9065-7ca4df3cdc68\") " pod="openshift-marketplace/redhat-operators-j8dxg"
Jan 29 15:06:37 crc kubenswrapper[4620]: I0129 15:06:37.858748 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl5jj\" (UniqueName: \"kubernetes.io/projected/2d0c7733-6f6a-4468-9065-7ca4df3cdc68-kube-api-access-dl5jj\") pod \"redhat-operators-j8dxg\" (UID: \"2d0c7733-6f6a-4468-9065-7ca4df3cdc68\") " pod="openshift-marketplace/redhat-operators-j8dxg"
Jan 29 15:06:37 crc kubenswrapper[4620]: I0129 15:06:37.959562 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d0c7733-6f6a-4468-9065-7ca4df3cdc68-utilities\") pod \"redhat-operators-j8dxg\" (UID: \"2d0c7733-6f6a-4468-9065-7ca4df3cdc68\") " pod="openshift-marketplace/redhat-operators-j8dxg"
Jan 29 15:06:37 crc kubenswrapper[4620]: I0129 15:06:37.959614 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl5jj\" (UniqueName: \"kubernetes.io/projected/2d0c7733-6f6a-4468-9065-7ca4df3cdc68-kube-api-access-dl5jj\") pod \"redhat-operators-j8dxg\" (UID: \"2d0c7733-6f6a-4468-9065-7ca4df3cdc68\") " pod="openshift-marketplace/redhat-operators-j8dxg"
Jan 29 15:06:37 crc kubenswrapper[4620]: I0129 15:06:37.961813 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d0c7733-6f6a-4468-9065-7ca4df3cdc68-catalog-content\") pod \"redhat-operators-j8dxg\" (UID: \"2d0c7733-6f6a-4468-9065-7ca4df3cdc68\") " pod="openshift-marketplace/redhat-operators-j8dxg"
Jan 29 15:06:37 crc kubenswrapper[4620]: I0129 15:06:37.962283 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d0c7733-6f6a-4468-9065-7ca4df3cdc68-catalog-content\") pod \"redhat-operators-j8dxg\" (UID: \"2d0c7733-6f6a-4468-9065-7ca4df3cdc68\") " pod="openshift-marketplace/redhat-operators-j8dxg"
Jan 29 15:06:37 crc kubenswrapper[4620]: I0129 15:06:37.962496 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d0c7733-6f6a-4468-9065-7ca4df3cdc68-utilities\") pod \"redhat-operators-j8dxg\" (UID: \"2d0c7733-6f6a-4468-9065-7ca4df3cdc68\") " pod="openshift-marketplace/redhat-operators-j8dxg"
Jan 29 15:06:37 crc kubenswrapper[4620]: I0129 15:06:37.968966 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-89hq7"]
Jan 29 15:06:37 crc kubenswrapper[4620]: I0129 15:06:37.971381 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89hq7"
Jan 29 15:06:37 crc kubenswrapper[4620]: I0129 15:06:37.981091 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 29 15:06:37 crc kubenswrapper[4620]: I0129 15:06:37.984345 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-89hq7"]
Jan 29 15:06:37 crc kubenswrapper[4620]: I0129 15:06:37.998343 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl5jj\" (UniqueName: \"kubernetes.io/projected/2d0c7733-6f6a-4468-9065-7ca4df3cdc68-kube-api-access-dl5jj\") pod \"redhat-operators-j8dxg\" (UID: \"2d0c7733-6f6a-4468-9065-7ca4df3cdc68\") " pod="openshift-marketplace/redhat-operators-j8dxg"
Jan 29 15:06:38 crc kubenswrapper[4620]: I0129 15:06:38.064492 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c4g7\" (UniqueName: \"kubernetes.io/projected/1f6d8d26-432a-439a-b3d5-a122230a094f-kube-api-access-6c4g7\") pod \"redhat-marketplace-89hq7\" (UID: \"1f6d8d26-432a-439a-b3d5-a122230a094f\") " pod="openshift-marketplace/redhat-marketplace-89hq7"
Jan 29 15:06:38 crc kubenswrapper[4620]: I0129 15:06:38.064593 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f6d8d26-432a-439a-b3d5-a122230a094f-utilities\") pod \"redhat-marketplace-89hq7\" (UID: \"1f6d8d26-432a-439a-b3d5-a122230a094f\") " pod="openshift-marketplace/redhat-marketplace-89hq7"
Jan 29 15:06:38 crc kubenswrapper[4620]: I0129 15:06:38.064785 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f6d8d26-432a-439a-b3d5-a122230a094f-catalog-content\") pod \"redhat-marketplace-89hq7\" (UID: \"1f6d8d26-432a-439a-b3d5-a122230a094f\") " pod="openshift-marketplace/redhat-marketplace-89hq7"
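The catalog pods above all follow the kubelet's three-step volume flow: VerifyControllerAttachedVolume confirms the controller has attached (or need not attach) the volume, operationExecutor.MountVolume kicks off the mount, and MountVolume.SetUp reports success once the emptyDir or projected volume exists under the pod's volumes directory. A compressed sketch of that progression as a toy state machine; the types and transitions are assumptions for illustration, not the kubelet's actual reconciler:

package main

import "fmt"

type volState int

const (
	attached volState = iota // VerifyControllerAttachedVolume succeeded
	mounting                 // operationExecutor.MountVolume started
	mounted                  // MountVolume.SetUp succeeded
)

// reconcile advances each volume one step toward "mounted", mirroring
// the attach -> mount -> set-up progression in the records above.
func reconcile(states map[string]volState) {
	for name, s := range states {
		switch s {
		case attached:
			states[name] = mounting
			fmt.Printf("MountVolume started for %q\n", name)
		case mounting:
			states[name] = mounted
			fmt.Printf("MountVolume.SetUp succeeded for %q\n", name)
		}
	}
}

func main() {
	vols := map[string]volState{"catalog-content": attached, "utilities": attached}
	for i := 0; i < 2; i++ {
		reconcile(vols)
	}
}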
Jan 29 15:06:38 crc kubenswrapper[4620]: I0129 15:06:38.102780 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j8dxg"
Jan 29 15:06:38 crc kubenswrapper[4620]: I0129 15:06:38.166591 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c4g7\" (UniqueName: \"kubernetes.io/projected/1f6d8d26-432a-439a-b3d5-a122230a094f-kube-api-access-6c4g7\") pod \"redhat-marketplace-89hq7\" (UID: \"1f6d8d26-432a-439a-b3d5-a122230a094f\") " pod="openshift-marketplace/redhat-marketplace-89hq7"
Jan 29 15:06:38 crc kubenswrapper[4620]: I0129 15:06:38.166714 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f6d8d26-432a-439a-b3d5-a122230a094f-utilities\") pod \"redhat-marketplace-89hq7\" (UID: \"1f6d8d26-432a-439a-b3d5-a122230a094f\") " pod="openshift-marketplace/redhat-marketplace-89hq7"
Jan 29 15:06:38 crc kubenswrapper[4620]: I0129 15:06:38.166809 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f6d8d26-432a-439a-b3d5-a122230a094f-catalog-content\") pod \"redhat-marketplace-89hq7\" (UID: \"1f6d8d26-432a-439a-b3d5-a122230a094f\") " pod="openshift-marketplace/redhat-marketplace-89hq7"
Jan 29 15:06:38 crc kubenswrapper[4620]: I0129 15:06:38.167792 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f6d8d26-432a-439a-b3d5-a122230a094f-catalog-content\") pod \"redhat-marketplace-89hq7\" (UID: \"1f6d8d26-432a-439a-b3d5-a122230a094f\") " pod="openshift-marketplace/redhat-marketplace-89hq7"
Jan 29 15:06:38 crc kubenswrapper[4620]: I0129 15:06:38.168827 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f6d8d26-432a-439a-b3d5-a122230a094f-utilities\") pod \"redhat-marketplace-89hq7\" (UID: \"1f6d8d26-432a-439a-b3d5-a122230a094f\") " pod="openshift-marketplace/redhat-marketplace-89hq7"
Jan 29 15:06:38 crc kubenswrapper[4620]: I0129 15:06:38.196614 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c4g7\" (UniqueName: \"kubernetes.io/projected/1f6d8d26-432a-439a-b3d5-a122230a094f-kube-api-access-6c4g7\") pod \"redhat-marketplace-89hq7\" (UID: \"1f6d8d26-432a-439a-b3d5-a122230a094f\") " pod="openshift-marketplace/redhat-marketplace-89hq7"
Jan 29 15:06:38 crc kubenswrapper[4620]: I0129 15:06:38.299990 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89hq7"
Jan 29 15:06:38 crc kubenswrapper[4620]: I0129 15:06:38.497333 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j8dxg"]
Jan 29 15:06:38 crc kubenswrapper[4620]: I0129 15:06:38.602283 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8dxg" event={"ID":"2d0c7733-6f6a-4468-9065-7ca4df3cdc68","Type":"ContainerStarted","Data":"43a61ff32365abc4a45ac4ae3fc3b34f593fe390b4e45867fafae2698944d93e"}
Jan 29 15:06:38 crc kubenswrapper[4620]: I0129 15:06:38.689798 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-89hq7"]
Jan 29 15:06:38 crc kubenswrapper[4620]: W0129 15:06:38.709350 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f6d8d26_432a_439a_b3d5_a122230a094f.slice/crio-b7d943cb0922b367aa1aba804646c028d223ede9fe092761d13feed52a570b13 WatchSource:0}: Error finding container b7d943cb0922b367aa1aba804646c028d223ede9fe092761d13feed52a570b13: Status 404 returned error can't find the container with id b7d943cb0922b367aa1aba804646c028d223ede9fe092761d13feed52a570b13
Jan 29 15:06:39 crc kubenswrapper[4620]: I0129 15:06:39.613870 4620 generic.go:334] "Generic (PLEG): container finished" podID="2d0c7733-6f6a-4468-9065-7ca4df3cdc68" containerID="8dadccd5f64e04ab0fabe54bf2addbd7850e57ae95cc480d901848754348dba4" exitCode=0
Jan 29 15:06:39 crc kubenswrapper[4620]: I0129 15:06:39.613997 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8dxg" event={"ID":"2d0c7733-6f6a-4468-9065-7ca4df3cdc68","Type":"ContainerDied","Data":"8dadccd5f64e04ab0fabe54bf2addbd7850e57ae95cc480d901848754348dba4"}
Jan 29 15:06:39 crc kubenswrapper[4620]: I0129 15:06:39.616847 4620 generic.go:334] "Generic (PLEG): container finished" podID="1f6d8d26-432a-439a-b3d5-a122230a094f" containerID="96751087e70c9f6e911290ebf89bc649f86fccdb8ecd0ecf86319bb0c30dc4c7" exitCode=0
Jan 29 15:06:39 crc kubenswrapper[4620]: I0129 15:06:39.616906 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89hq7" event={"ID":"1f6d8d26-432a-439a-b3d5-a122230a094f","Type":"ContainerDied","Data":"96751087e70c9f6e911290ebf89bc649f86fccdb8ecd0ecf86319bb0c30dc4c7"}
Jan 29 15:06:39 crc kubenswrapper[4620]: I0129 15:06:39.616935 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89hq7" event={"ID":"1f6d8d26-432a-439a-b3d5-a122230a094f","Type":"ContainerStarted","Data":"b7d943cb0922b367aa1aba804646c028d223ede9fe092761d13feed52a570b13"}
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.362872 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gjnp4"]
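The W-level manager.go entry above is cAdvisor racing the runtime: its cgroup watch fires for a freshly created crio-<id> cgroup before CRI-O can answer a status lookup for that ID, so the query 404s; the container is picked up on the next housekeeping pass, and the ContainerStarted events that follow confirm the race is benign. Pulling the container ID back out of such a cgroup path is plain string surgery (an assumed helper, not cAdvisor's parser):

package main

import (
	"fmt"
	"path"
	"strings"
)

// containerIDFromCgroup extracts the runtime container ID from a cgroup
// leaf like .../kubepods-burstable-pod<uid>.slice/crio-<id>. Illustrative only.
func containerIDFromCgroup(cgroupPath string) (string, bool) {
	leaf := path.Base(cgroupPath)
	id, found := strings.CutPrefix(leaf, "crio-")
	return strings.TrimSuffix(id, ".scope"), found
}

func main() {
	id, ok := containerIDFromCgroup(
		"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f6d8d26_432a_439a_b3d5_a122230a094f.slice/crio-b7d943cb0922b367aa1aba804646c028d223ede9fe092761d13feed52a570b13")
	fmt.Println(ok, id)
}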
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.365020 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjnp4"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.368237 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.376434 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjnp4"]
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.500466 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b558081-fc79-4ae4-b2e5-b9ea0da28279-catalog-content\") pod \"community-operators-gjnp4\" (UID: \"4b558081-fc79-4ae4-b2e5-b9ea0da28279\") " pod="openshift-marketplace/community-operators-gjnp4"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.500545 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b558081-fc79-4ae4-b2e5-b9ea0da28279-utilities\") pod \"community-operators-gjnp4\" (UID: \"4b558081-fc79-4ae4-b2e5-b9ea0da28279\") " pod="openshift-marketplace/community-operators-gjnp4"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.500694 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qstfx\" (UniqueName: \"kubernetes.io/projected/4b558081-fc79-4ae4-b2e5-b9ea0da28279-kube-api-access-qstfx\") pod \"community-operators-gjnp4\" (UID: \"4b558081-fc79-4ae4-b2e5-b9ea0da28279\") " pod="openshift-marketplace/community-operators-gjnp4"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.558843 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rdbgv"]
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.562099 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rdbgv"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.567847 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.572653 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rdbgv"]
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.602267 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b558081-fc79-4ae4-b2e5-b9ea0da28279-catalog-content\") pod \"community-operators-gjnp4\" (UID: \"4b558081-fc79-4ae4-b2e5-b9ea0da28279\") " pod="openshift-marketplace/community-operators-gjnp4"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.602363 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b558081-fc79-4ae4-b2e5-b9ea0da28279-utilities\") pod \"community-operators-gjnp4\" (UID: \"4b558081-fc79-4ae4-b2e5-b9ea0da28279\") " pod="openshift-marketplace/community-operators-gjnp4"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.602434 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qstfx\" (UniqueName: \"kubernetes.io/projected/4b558081-fc79-4ae4-b2e5-b9ea0da28279-kube-api-access-qstfx\") pod \"community-operators-gjnp4\" (UID: \"4b558081-fc79-4ae4-b2e5-b9ea0da28279\") " pod="openshift-marketplace/community-operators-gjnp4"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.602892 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b558081-fc79-4ae4-b2e5-b9ea0da28279-utilities\") pod \"community-operators-gjnp4\" (UID: \"4b558081-fc79-4ae4-b2e5-b9ea0da28279\") " pod="openshift-marketplace/community-operators-gjnp4"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.602945 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b558081-fc79-4ae4-b2e5-b9ea0da28279-catalog-content\") pod \"community-operators-gjnp4\" (UID: \"4b558081-fc79-4ae4-b2e5-b9ea0da28279\") " pod="openshift-marketplace/community-operators-gjnp4"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.643071 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qstfx\" (UniqueName: \"kubernetes.io/projected/4b558081-fc79-4ae4-b2e5-b9ea0da28279-kube-api-access-qstfx\") pod \"community-operators-gjnp4\" (UID: \"4b558081-fc79-4ae4-b2e5-b9ea0da28279\") " pod="openshift-marketplace/community-operators-gjnp4"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.688917 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjnp4"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.703770 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsqtx\" (UniqueName: \"kubernetes.io/projected/0dcb80a9-ab19-465f-a6e3-0b287574f166-kube-api-access-tsqtx\") pod \"certified-operators-rdbgv\" (UID: \"0dcb80a9-ab19-465f-a6e3-0b287574f166\") " pod="openshift-marketplace/certified-operators-rdbgv"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.703851 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dcb80a9-ab19-465f-a6e3-0b287574f166-utilities\") pod \"certified-operators-rdbgv\" (UID: \"0dcb80a9-ab19-465f-a6e3-0b287574f166\") " pod="openshift-marketplace/certified-operators-rdbgv"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.704028 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dcb80a9-ab19-465f-a6e3-0b287574f166-catalog-content\") pod \"certified-operators-rdbgv\" (UID: \"0dcb80a9-ab19-465f-a6e3-0b287574f166\") " pod="openshift-marketplace/certified-operators-rdbgv"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.805328 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dcb80a9-ab19-465f-a6e3-0b287574f166-utilities\") pod \"certified-operators-rdbgv\" (UID: \"0dcb80a9-ab19-465f-a6e3-0b287574f166\") " pod="openshift-marketplace/certified-operators-rdbgv"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.805418 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dcb80a9-ab19-465f-a6e3-0b287574f166-catalog-content\") pod \"certified-operators-rdbgv\" (UID: \"0dcb80a9-ab19-465f-a6e3-0b287574f166\") " pod="openshift-marketplace/certified-operators-rdbgv"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.805470 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsqtx\" (UniqueName: \"kubernetes.io/projected/0dcb80a9-ab19-465f-a6e3-0b287574f166-kube-api-access-tsqtx\") pod \"certified-operators-rdbgv\" (UID: \"0dcb80a9-ab19-465f-a6e3-0b287574f166\") " pod="openshift-marketplace/certified-operators-rdbgv"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.806299 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dcb80a9-ab19-465f-a6e3-0b287574f166-utilities\") pod \"certified-operators-rdbgv\" (UID: \"0dcb80a9-ab19-465f-a6e3-0b287574f166\") " pod="openshift-marketplace/certified-operators-rdbgv"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.806333 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dcb80a9-ab19-465f-a6e3-0b287574f166-catalog-content\") pod \"certified-operators-rdbgv\" (UID: \"0dcb80a9-ab19-465f-a6e3-0b287574f166\") " pod="openshift-marketplace/certified-operators-rdbgv"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.829887 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsqtx\" (UniqueName: \"kubernetes.io/projected/0dcb80a9-ab19-465f-a6e3-0b287574f166-kube-api-access-tsqtx\") pod \"certified-operators-rdbgv\" (UID: \"0dcb80a9-ab19-465f-a6e3-0b287574f166\") " pod="openshift-marketplace/certified-operators-rdbgv"
Jan 29 15:06:40 crc kubenswrapper[4620]: I0129 15:06:40.880793 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rdbgv"
Jan 29 15:06:41 crc kubenswrapper[4620]: I0129 15:06:41.241235 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjnp4"]
Jan 29 15:06:41 crc kubenswrapper[4620]: W0129 15:06:41.245116 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b558081_fc79_4ae4_b2e5_b9ea0da28279.slice/crio-b8ce5cfd5a7d2f0fa14d3e2a4ab77ca5074e2f543cc89a96fdaab04bedb2d5b8 WatchSource:0}: Error finding container b8ce5cfd5a7d2f0fa14d3e2a4ab77ca5074e2f543cc89a96fdaab04bedb2d5b8: Status 404 returned error can't find the container with id b8ce5cfd5a7d2f0fa14d3e2a4ab77ca5074e2f543cc89a96fdaab04bedb2d5b8
Jan 29 15:06:41 crc kubenswrapper[4620]: I0129 15:06:41.441015 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rdbgv"]
Jan 29 15:06:41 crc kubenswrapper[4620]: W0129 15:06:41.470665 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dcb80a9_ab19_465f_a6e3_0b287574f166.slice/crio-99c0208a8ff02d8a6055ba8e485666795ab374e9174194d8b346791eed34d413 WatchSource:0}: Error finding container 99c0208a8ff02d8a6055ba8e485666795ab374e9174194d8b346791eed34d413: Status 404 returned error can't find the container with id 99c0208a8ff02d8a6055ba8e485666795ab374e9174194d8b346791eed34d413
Jan 29 15:06:41 crc kubenswrapper[4620]: I0129 15:06:41.641046 4620 generic.go:334] "Generic (PLEG): container finished" podID="4b558081-fc79-4ae4-b2e5-b9ea0da28279" containerID="680d8b5682300d88456b3d09d700959063d51f0e9f4f43a4bb3c88e235dabe83" exitCode=0
Jan 29 15:06:41 crc kubenswrapper[4620]: I0129 15:06:41.641278 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjnp4" event={"ID":"4b558081-fc79-4ae4-b2e5-b9ea0da28279","Type":"ContainerDied","Data":"680d8b5682300d88456b3d09d700959063d51f0e9f4f43a4bb3c88e235dabe83"}
Jan 29 15:06:41 crc kubenswrapper[4620]: I0129 15:06:41.641314 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjnp4" event={"ID":"4b558081-fc79-4ae4-b2e5-b9ea0da28279","Type":"ContainerStarted","Data":"b8ce5cfd5a7d2f0fa14d3e2a4ab77ca5074e2f543cc89a96fdaab04bedb2d5b8"}
Jan 29 15:06:41 crc kubenswrapper[4620]: I0129 15:06:41.642865 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdbgv" event={"ID":"0dcb80a9-ab19-465f-a6e3-0b287574f166","Type":"ContainerStarted","Data":"99c0208a8ff02d8a6055ba8e485666795ab374e9174194d8b346791eed34d413"}
Jan 29 15:06:42 crc kubenswrapper[4620]: I0129 15:06:42.768579 4620 generic.go:334] "Generic (PLEG): container finished" podID="0dcb80a9-ab19-465f-a6e3-0b287574f166" containerID="3cec0a8955cd4c5f25345d9d53ec7c9c9d89a5932947bb0e3e472a5d03c8a7fb" exitCode=0
Jan 29 15:06:42 crc kubenswrapper[4620]: I0129 15:06:42.768693 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdbgv" event={"ID":"0dcb80a9-ab19-465f-a6e3-0b287574f166","Type":"ContainerDied","Data":"3cec0a8955cd4c5f25345d9d53ec7c9c9d89a5932947bb0e3e472a5d03c8a7fb"}
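Each marketplace pod repeats the same PLEG rhythm: the sandbox starts, a short-lived init container (by convention the one that seeds the shared utilities emptyDir) exits with code 0, and only then may the next init container, extract-content, begin pulling its index image. The kubelet gates init-container progression on a zero exit code; a toy version of that rule, under assumed event types rather than the kubelet's PLEG structs:

package main

import "fmt"

// plegEvent mirrors the shape of the "Generic (PLEG): container finished"
// records above; the field names are assumptions for illustration.
type plegEvent struct {
	kind     string // "ContainerStarted" or "ContainerDied"
	exitCode int
}

// nextInitAllowed applies the rule the log demonstrates: an init container
// must die with exit code 0 before its successor may start.
func nextInitAllowed(e plegEvent) bool {
	return e.kind == "ContainerDied" && e.exitCode == 0
}

func main() {
	fmt.Println(nextInitAllowed(plegEvent{"ContainerDied", 0})) // true
	fmt.Println(nextInitAllowed(plegEvent{"ContainerDied", 1})) // false
}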
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.138989 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-czbcl"]
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.140192 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.165406 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-czbcl"]
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.246704 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/385da08d-3ef1-43be-ac39-87ff3865c1bf-trusted-ca\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.246743 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/385da08d-3ef1-43be-ac39-87ff3865c1bf-registry-tls\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.246790 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72hbn\" (UniqueName: \"kubernetes.io/projected/385da08d-3ef1-43be-ac39-87ff3865c1bf-kube-api-access-72hbn\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.246817 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.246852 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/385da08d-3ef1-43be-ac39-87ff3865c1bf-registry-certificates\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.246877 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/385da08d-3ef1-43be-ac39-87ff3865c1bf-ca-trust-extracted\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.246895 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/385da08d-3ef1-43be-ac39-87ff3865c1bf-installation-pull-secrets\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.246931 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/385da08d-3ef1-43be-ac39-87ff3865c1bf-bound-sa-token\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.266420 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.348395 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/385da08d-3ef1-43be-ac39-87ff3865c1bf-installation-pull-secrets\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.348478 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/385da08d-3ef1-43be-ac39-87ff3865c1bf-bound-sa-token\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.348502 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/385da08d-3ef1-43be-ac39-87ff3865c1bf-trusted-ca\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.348520 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/385da08d-3ef1-43be-ac39-87ff3865c1bf-registry-tls\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.348544 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72hbn\" (UniqueName: \"kubernetes.io/projected/385da08d-3ef1-43be-ac39-87ff3865c1bf-kube-api-access-72hbn\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.348633 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/385da08d-3ef1-43be-ac39-87ff3865c1bf-registry-certificates\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.348677 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/385da08d-3ef1-43be-ac39-87ff3865c1bf-ca-trust-extracted\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.350147 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/385da08d-3ef1-43be-ac39-87ff3865c1bf-trusted-ca\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.350316 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/385da08d-3ef1-43be-ac39-87ff3865c1bf-registry-certificates\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.350393 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/385da08d-3ef1-43be-ac39-87ff3865c1bf-ca-trust-extracted\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.354241 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/385da08d-3ef1-43be-ac39-87ff3865c1bf-registry-tls\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.361219 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/385da08d-3ef1-43be-ac39-87ff3865c1bf-installation-pull-secrets\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.366805 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/385da08d-3ef1-43be-ac39-87ff3865c1bf-bound-sa-token\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.377978 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72hbn\" (UniqueName: \"kubernetes.io/projected/385da08d-3ef1-43be-ac39-87ff3865c1bf-kube-api-access-72hbn\") pod \"image-registry-66df7c8f76-czbcl\" (UID: \"385da08d-3ef1-43be-ac39-87ff3865c1bf\") " pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
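Unlike the emptyDir, configmap, and projected volumes around it, the registry's data volume is named by a CSI UniqueName of the form kubernetes.io/csi/<driver>^<volumeHandle>, so kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 carries both the driver and the PV handle in one string. Splitting it needs only the stdlib (an illustrative helper; the real parsing lives in the kubelet's CSI plugin code):

package main

import (
	"fmt"
	"strings"
)

// splitCSIUniqueName separates kubernetes.io/csi/<driver>^<handle>
// into its driver and volume-handle parts. Illustrative helper.
func splitCSIUniqueName(unique string) (driver, handle string, ok bool) {
	rest, found := strings.CutPrefix(unique, "kubernetes.io/csi/")
	if !found {
		return "", "", false
	}
	driver, handle, ok = strings.Cut(rest, "^")
	return driver, handle, ok
}

func main() {
	d, h, ok := splitCSIUniqueName(
		"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8")
	fmt.Println(ok, d, h)
}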
Jan 29 15:06:55 crc kubenswrapper[4620]: I0129 15:06:55.458069 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:07:04 crc kubenswrapper[4620]: I0129 15:07:04.111027 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 15:07:04 crc kubenswrapper[4620]: I0129 15:07:04.111615 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 15:07:10 crc kubenswrapper[4620]: E0129 15:07:10.196806 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 29 15:07:10 crc kubenswrapper[4620]: E0129 15:07:10.197536 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qstfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gjnp4_openshift-marketplace(4b558081-fc79-4ae4-b2e5-b9ea0da28279): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 29 15:07:10 crc kubenswrapper[4620]: E0129 15:07:10.198747 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gjnp4" podUID="4b558081-fc79-4ae4-b2e5-b9ea0da28279"
Jan 29 15:07:13 crc kubenswrapper[4620]: E0129 15:07:13.431232 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gjnp4" podUID="4b558081-fc79-4ae4-b2e5-b9ea0da28279"
Jan 29 15:07:13 crc kubenswrapper[4620]: E0129 15:07:13.531956 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 29 15:07:13 crc kubenswrapper[4620]: E0129 15:07:13.532305 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dl5jj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-j8dxg_openshift-marketplace(2d0c7733-6f6a-4468-9065-7ca4df3cdc68): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 29 15:07:13 crc kubenswrapper[4620]: E0129 15:07:13.533804 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-j8dxg" podUID="2d0c7733-6f6a-4468-9065-7ca4df3cdc68"
Jan 29 15:07:13 crc kubenswrapper[4620]: I0129 15:07:13.834667 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-czbcl"]
Jan 29 15:07:15 crc kubenswrapper[4620]: E0129 15:07:15.347858 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-j8dxg" podUID="2d0c7733-6f6a-4468-9065-7ca4df3cdc68"
Jan 29 15:07:15 crc kubenswrapper[4620]: E0129 15:07:15.667349 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 29 15:07:15 crc kubenswrapper[4620]: E0129 15:07:15.668251 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tsqtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rdbgv_openshift-marketplace(0dcb80a9-ab19-465f-a6e3-0b287574f166): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 29 15:07:15 crc kubenswrapper[4620]: E0129 15:07:15.669511 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rdbgv" podUID="0dcb80a9-ab19-465f-a6e3-0b287574f166"
Jan 29 15:07:15 crc kubenswrapper[4620]: I0129 15:07:15.949030 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89hq7" event={"ID":"1f6d8d26-432a-439a-b3d5-a122230a094f","Type":"ContainerStarted","Data":"23737124ac18a288d66d5bcef13d280801f2dcf8556162b106af855dc5469bc1"}
Jan 29 15:07:15 crc kubenswrapper[4620]: I0129 15:07:15.951314 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-czbcl" event={"ID":"385da08d-3ef1-43be-ac39-87ff3865c1bf","Type":"ContainerStarted","Data":"2a296e12ac96a29c482a5bbcf01557bcb870a3bc03d773dde7bb9c4ce3afe4e1"}
Jan 29 15:07:15 crc kubenswrapper[4620]: I0129 15:07:15.951348 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-czbcl" event={"ID":"385da08d-3ef1-43be-ac39-87ff3865c1bf","Type":"ContainerStarted","Data":"46620999a5a0b6c3d7b8a7dac47718b33c6c9dafb271cc721130fae96b44894c"}
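The three catalog pods hit an identical failure pipeline here: the CRI PullImage call is cancelled mid-copy (ErrImagePull), the pod worker logs "Error syncing pod, skipping", and later syncs short-circuit with ImagePullBackOff until the backoff window lapses, which is why the pulls only succeed a minute or more later. The image-pull backoff is exponential; the 10s initial delay and 300s cap in the sketch below are the commonly documented kubelet defaults, assumed here rather than read from this node's configuration:

package main

import (
	"fmt"
	"time"
)

// backoffSchedule reproduces an exponential image-pull backoff:
// start at base, double per failure, clamp at maxD. The 10s/300s
// figures used in main are assumed kubelet defaults.
func backoffSchedule(base, maxD time.Duration, failures int) []time.Duration {
	out := make([]time.Duration, 0, failures)
	d := base
	for i := 0; i < failures; i++ {
		out = append(out, d)
		d *= 2
		if d > maxD {
			d = maxD
		}
	}
	return out
}

func main() {
	fmt.Println(backoffSchedule(10*time.Second, 300*time.Second, 7))
	// [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]
}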
Jan 29 15:07:15 crc kubenswrapper[4620]: E0129 15:07:15.953123 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rdbgv" podUID="0dcb80a9-ab19-465f-a6e3-0b287574f166"
Jan 29 15:07:15 crc kubenswrapper[4620]: I0129 15:07:15.986518 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-czbcl" podStartSLOduration=20.986495947 podStartE2EDuration="20.986495947s" podCreationTimestamp="2026-01-29 15:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:07:15.98412503 +0000 UTC m=+376.596952695" watchObservedRunningTime="2026-01-29 15:07:15.986495947 +0000 UTC m=+376.599323602"
Jan 29 15:07:16 crc kubenswrapper[4620]: I0129 15:07:16.957638 4620 generic.go:334] "Generic (PLEG): container finished" podID="1f6d8d26-432a-439a-b3d5-a122230a094f" containerID="23737124ac18a288d66d5bcef13d280801f2dcf8556162b106af855dc5469bc1" exitCode=0
Jan 29 15:07:16 crc kubenswrapper[4620]: I0129 15:07:16.957748 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89hq7" event={"ID":"1f6d8d26-432a-439a-b3d5-a122230a094f","Type":"ContainerDied","Data":"23737124ac18a288d66d5bcef13d280801f2dcf8556162b106af855dc5469bc1"}
Jan 29 15:07:16 crc kubenswrapper[4620]: I0129 15:07:16.958566 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:07:18 crc kubenswrapper[4620]: I0129 15:07:18.969018 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89hq7" event={"ID":"1f6d8d26-432a-439a-b3d5-a122230a094f","Type":"ContainerStarted","Data":"4e58125734fbb8e2634de95608907d84fb9b1f1eac8ef89761a51671d647e5e3"}
Jan 29 15:07:18 crc kubenswrapper[4620]: I0129 15:07:18.985662 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-89hq7" podStartSLOduration=3.463957941 podStartE2EDuration="41.985643263s" podCreationTimestamp="2026-01-29 15:06:37 +0000 UTC" firstStartedPulling="2026-01-29 15:06:39.618661251 +0000 UTC m=+340.231488896" lastFinishedPulling="2026-01-29 15:07:18.140346573 +0000 UTC m=+378.753174218" observedRunningTime="2026-01-29 15:07:18.98366801 +0000 UTC m=+379.596495675" watchObservedRunningTime="2026-01-29 15:07:18.985643263 +0000 UTC m=+379.598470938"
Jan 29 15:07:28 crc kubenswrapper[4620]: I0129 15:07:28.300371 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-89hq7"
Jan 29 15:07:28 crc kubenswrapper[4620]: I0129 15:07:28.302306 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-89hq7"
Jan 29 15:07:28 crc kubenswrapper[4620]: I0129 15:07:28.460379 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-89hq7"
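The pod_startup_latency_tracker records encode a useful identity: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). For redhat-marketplace-89hq7 above, 41.985643263s end to end minus a 38.521685322s pull window leaves exactly the reported 3.463957941s. Checked with the monotonic (m=+) offsets from the record:

package main

import "fmt"

func main() {
	// Monotonic offsets (m=+...) copied from the redhat-marketplace-89hq7 record.
	firstStartedPulling := 340.231488896
	lastFinishedPulling := 378.753174218
	e2e := 41.985643263 // observedRunningTime - podCreationTimestamp

	slo := e2e - (lastFinishedPulling - firstStartedPulling)
	fmt.Printf("podStartSLOduration = %.9fs\n", slo) // prints 3.463957941s
}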
Jan 29 15:07:29 crc kubenswrapper[4620]: I0129 15:07:29.022121 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8dxg" event={"ID":"2d0c7733-6f6a-4468-9065-7ca4df3cdc68","Type":"ContainerStarted","Data":"faca5385c0e97a950dc06cb2f38d999b39cae86b125fc188a82d339580f45271"}
Jan 29 15:07:29 crc kubenswrapper[4620]: I0129 15:07:29.025628 4620 generic.go:334] "Generic (PLEG): container finished" podID="4b558081-fc79-4ae4-b2e5-b9ea0da28279" containerID="12b2c7d4d49b54908f26da4db33c333d763ef13559028b03f0f3beededdb6aee" exitCode=0
Jan 29 15:07:29 crc kubenswrapper[4620]: I0129 15:07:29.025715 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjnp4" event={"ID":"4b558081-fc79-4ae4-b2e5-b9ea0da28279","Type":"ContainerDied","Data":"12b2c7d4d49b54908f26da4db33c333d763ef13559028b03f0f3beededdb6aee"}
Jan 29 15:07:29 crc kubenswrapper[4620]: I0129 15:07:29.091624 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-89hq7"
Jan 29 15:07:34 crc kubenswrapper[4620]: I0129 15:07:34.111374 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 15:07:34 crc kubenswrapper[4620]: I0129 15:07:34.111846 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 15:07:35 crc kubenswrapper[4620]: I0129 15:07:35.061420 4620 generic.go:334] "Generic (PLEG): container finished" podID="2d0c7733-6f6a-4468-9065-7ca4df3cdc68" containerID="faca5385c0e97a950dc06cb2f38d999b39cae86b125fc188a82d339580f45271" exitCode=0
Jan 29 15:07:35 crc kubenswrapper[4620]: I0129 15:07:35.061475 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8dxg" event={"ID":"2d0c7733-6f6a-4468-9065-7ca4df3cdc68","Type":"ContainerDied","Data":"faca5385c0e97a950dc06cb2f38d999b39cae86b125fc188a82d339580f45271"}
Jan 29 15:07:35 crc kubenswrapper[4620]: I0129 15:07:35.465052 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-czbcl"
Jan 29 15:07:35 crc kubenswrapper[4620]: I0129 15:07:35.544103 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tzmcd"]
Jan 29 15:07:38 crc kubenswrapper[4620]: I0129 15:07:38.082260 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjnp4" event={"ID":"4b558081-fc79-4ae4-b2e5-b9ea0da28279","Type":"ContainerStarted","Data":"3a117a5d6f05cc137990065f44a2e98b0418422ac26c84a023dd4875d1b59095"}
Jan 29 15:07:38 crc kubenswrapper[4620]: I0129 15:07:38.102105 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gjnp4" podStartSLOduration=2.858276935 podStartE2EDuration="58.10208703s" podCreationTimestamp="2026-01-29 15:06:40 +0000 UTC" firstStartedPulling="2026-01-29 15:06:41.642111629 +0000 UTC m=+342.254939284" lastFinishedPulling="2026-01-29 15:07:36.885921744 +0000 UTC m=+397.498749379" observedRunningTime="2026-01-29 15:07:38.101769838 +0000 UTC m=+398.714597493" watchObservedRunningTime="2026-01-29 15:07:38.10208703 +0000 UTC m=+398.714914685"
Jan 29 15:07:40 crc kubenswrapper[4620]: I0129 15:07:40.690201 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gjnp4"
Jan 29 15:07:40 crc kubenswrapper[4620]: I0129 15:07:40.690504 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gjnp4"
Jan 29 15:07:40 crc kubenswrapper[4620]: I0129 15:07:40.726471 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gjnp4"
Jan 29 15:07:47 crc kubenswrapper[4620]: I0129 15:07:47.148889 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j8dxg" event={"ID":"2d0c7733-6f6a-4468-9065-7ca4df3cdc68","Type":"ContainerStarted","Data":"fb41a6fff0b8ed83205d1656c258346f9b1dcc702ef7a4c2e1c39e3c4f116eb4"}
Jan 29 15:07:47 crc kubenswrapper[4620]: I0129 15:07:47.168966 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j8dxg" podStartSLOduration=5.390992626 podStartE2EDuration="1m10.168947416s" podCreationTimestamp="2026-01-29 15:06:37 +0000 UTC" firstStartedPulling="2026-01-29 15:06:39.615706066 +0000 UTC m=+340.228533711" lastFinishedPulling="2026-01-29 15:07:44.393660846 +0000 UTC m=+405.006488501" observedRunningTime="2026-01-29 15:07:47.168937666 +0000 UTC m=+407.781765351" watchObservedRunningTime="2026-01-29 15:07:47.168947416 +0000 UTC m=+407.781775071"
Jan 29 15:07:48 crc kubenswrapper[4620]: I0129 15:07:48.103868 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j8dxg"
Jan 29 15:07:48 crc kubenswrapper[4620]: I0129 15:07:48.104357 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j8dxg"
Jan 29 15:07:49 crc kubenswrapper[4620]: I0129 15:07:49.144626 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j8dxg" podUID="2d0c7733-6f6a-4468-9065-7ca4df3cdc68" containerName="registry-server" probeResult="failure" output=<
Jan 29 15:07:49 crc kubenswrapper[4620]: timeout: failed to connect service ":50051" within 1s
Jan 29 15:07:49 crc kubenswrapper[4620]: >
Jan 29 15:07:49 crc kubenswrapper[4620]: I0129 15:07:49.161893 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdbgv" event={"ID":"0dcb80a9-ab19-465f-a6e3-0b287574f166","Type":"ContainerStarted","Data":"104a074e6520f2b4847ad5db5bd466d72e038bbe552e23fb5f6744466745d154"}
Jan 29 15:07:50 crc kubenswrapper[4620]: I0129 15:07:50.167975 4620 generic.go:334] "Generic (PLEG): container finished" podID="0dcb80a9-ab19-465f-a6e3-0b287574f166" containerID="104a074e6520f2b4847ad5db5bd466d72e038bbe552e23fb5f6744466745d154" exitCode=0
Jan 29 15:07:50 crc kubenswrapper[4620]: I0129 15:07:50.168058 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdbgv" event={"ID":"0dcb80a9-ab19-465f-a6e3-0b287574f166","Type":"ContainerDied","Data":"104a074e6520f2b4847ad5db5bd466d72e038bbe552e23fb5f6744466745d154"}
Jan 29 15:07:50 crc kubenswrapper[4620]: I0129 15:07:50.752992 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gjnp4"
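The startup probe failing above is the catalog pod's gRPC health check against :50051; the probe binary's multi-line "timeout: failed to connect service" output is folded by the kubelet into the probeResult record, and the probes flip from unhealthy to started to ready once the registry server finishes serving its extracted catalog. The reachability half of such a check is just a bounded dial (a stdlib sketch, not the actual probe binary):

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithin mimics the connect-with-deadline part of the startup probe:
// reachable within d, or report the same style of timeout message.
func dialWithin(addr string, d time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, d)
	if err != nil {
		return fmt.Errorf("timeout: failed to connect service %q within %s", addr, d)
	}
	return conn.Close()
}

func main() {
	fmt.Println(dialWithin("127.0.0.1:50051", time.Second))
}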
Jan 29 15:07:51 crc kubenswrapper[4620]: I0129 15:07:51.175684 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdbgv" event={"ID":"0dcb80a9-ab19-465f-a6e3-0b287574f166","Type":"ContainerStarted","Data":"099fa55475d7e00e6782a28dca74d35d8b38dec544758dd55b4ff6630647fc89"}
Jan 29 15:07:51 crc kubenswrapper[4620]: I0129 15:07:51.201839 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rdbgv" podStartSLOduration=3.346138302 podStartE2EDuration="1m11.201821297s" podCreationTimestamp="2026-01-29 15:06:40 +0000 UTC" firstStartedPulling="2026-01-29 15:06:42.771014198 +0000 UTC m=+343.383841843" lastFinishedPulling="2026-01-29 15:07:50.626697203 +0000 UTC m=+411.239524838" observedRunningTime="2026-01-29 15:07:51.196963581 +0000 UTC m=+411.809791236" watchObservedRunningTime="2026-01-29 15:07:51.201821297 +0000 UTC m=+411.814648962"
Jan 29 15:07:58 crc kubenswrapper[4620]: I0129 15:07:58.150489 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j8dxg"
Jan 29 15:07:58 crc kubenswrapper[4620]: I0129 15:07:58.199573 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j8dxg"
Jan 29 15:08:00 crc kubenswrapper[4620]: I0129 15:08:00.592790 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" podUID="8e8c35b0-3703-4eff-8610-6933e4b7b391" containerName="registry" containerID="cri-o://47b9e9f66dd0c2e49eaabd2b5d73dd51ce08a7e9f337425bbcd16e6d262e2ab7" gracePeriod=30
Jan 29 15:08:00 crc kubenswrapper[4620]: I0129 15:08:00.880955 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rdbgv"
Jan 29 15:08:00 crc kubenswrapper[4620]: I0129 15:08:00.881236 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rdbgv"
Jan 29 15:08:00 crc kubenswrapper[4620]: I0129 15:08:00.923576 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rdbgv"
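With the replacement registry Ready, the old image-registry-697d97f7c8-tzmcd pod is torn down: the kubelet kills its registry container under the pod's 30-second grace period (SIGTERM immediately, SIGKILL at the deadline if the process lingers), after which the volume unmounts below mirror the earlier mount sequence in reverse. The deadline arithmetic, made explicit (illustrative only):

package main

import (
	"fmt"
	"time"
)

func main() {
	// gracePeriod=30 from the "Killing container with a grace period" record.
	grace := 30 * time.Second
	sigterm := time.Now()
	sigkill := sigterm.Add(grace) // hard-kill deadline if SIGTERM is ignored

	fmt.Printf("SIGTERM at %s, SIGKILL no later than %s\n",
		sigterm.Format(time.TimeOnly), sigkill.Format(time.TimeOnly))
}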
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.083493 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8e8c35b0-3703-4eff-8610-6933e4b7b391-registry-tls\") pod \"8e8c35b0-3703-4eff-8610-6933e4b7b391\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.083596 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8e8c35b0-3703-4eff-8610-6933e4b7b391-bound-sa-token\") pod \"8e8c35b0-3703-4eff-8610-6933e4b7b391\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.083638 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8e8c35b0-3703-4eff-8610-6933e4b7b391-registry-certificates\") pod \"8e8c35b0-3703-4eff-8610-6933e4b7b391\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.083696 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8e8c35b0-3703-4eff-8610-6933e4b7b391-installation-pull-secrets\") pod \"8e8c35b0-3703-4eff-8610-6933e4b7b391\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.083732 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e8c35b0-3703-4eff-8610-6933e4b7b391-trusted-ca\") pod \"8e8c35b0-3703-4eff-8610-6933e4b7b391\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.083813 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrljf\" (UniqueName: \"kubernetes.io/projected/8e8c35b0-3703-4eff-8610-6933e4b7b391-kube-api-access-nrljf\") pod \"8e8c35b0-3703-4eff-8610-6933e4b7b391\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.084091 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8e8c35b0-3703-4eff-8610-6933e4b7b391\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.084155 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8e8c35b0-3703-4eff-8610-6933e4b7b391-ca-trust-extracted\") pod \"8e8c35b0-3703-4eff-8610-6933e4b7b391\" (UID: \"8e8c35b0-3703-4eff-8610-6933e4b7b391\") " Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.086936 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8c35b0-3703-4eff-8610-6933e4b7b391-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8e8c35b0-3703-4eff-8610-6933e4b7b391" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.088396 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8c35b0-3703-4eff-8610-6933e4b7b391-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8e8c35b0-3703-4eff-8610-6933e4b7b391" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.096701 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e8c35b0-3703-4eff-8610-6933e4b7b391-kube-api-access-nrljf" (OuterVolumeSpecName: "kube-api-access-nrljf") pod "8e8c35b0-3703-4eff-8610-6933e4b7b391" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391"). InnerVolumeSpecName "kube-api-access-nrljf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.097660 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e8c35b0-3703-4eff-8610-6933e4b7b391-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8e8c35b0-3703-4eff-8610-6933e4b7b391" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.101263 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e8c35b0-3703-4eff-8610-6933e4b7b391-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8e8c35b0-3703-4eff-8610-6933e4b7b391" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.102860 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "8e8c35b0-3703-4eff-8610-6933e4b7b391" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.105077 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e8c35b0-3703-4eff-8610-6933e4b7b391-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8e8c35b0-3703-4eff-8610-6933e4b7b391" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.109334 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e8c35b0-3703-4eff-8610-6933e4b7b391-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8e8c35b0-3703-4eff-8610-6933e4b7b391" (UID: "8e8c35b0-3703-4eff-8610-6933e4b7b391"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.186074 4620 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8e8c35b0-3703-4eff-8610-6933e4b7b391-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.186113 4620 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8e8c35b0-3703-4eff-8610-6933e4b7b391-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.186124 4620 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8e8c35b0-3703-4eff-8610-6933e4b7b391-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.186133 4620 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e8c35b0-3703-4eff-8610-6933e4b7b391-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.186141 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrljf\" (UniqueName: \"kubernetes.io/projected/8e8c35b0-3703-4eff-8610-6933e4b7b391-kube-api-access-nrljf\") on node \"crc\" DevicePath \"\"" Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.186153 4620 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8e8c35b0-3703-4eff-8610-6933e4b7b391-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.186163 4620 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8e8c35b0-3703-4eff-8610-6933e4b7b391-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.233791 4620 generic.go:334] "Generic (PLEG): container finished" podID="8e8c35b0-3703-4eff-8610-6933e4b7b391" containerID="47b9e9f66dd0c2e49eaabd2b5d73dd51ce08a7e9f337425bbcd16e6d262e2ab7" exitCode=0 Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.233860 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.233882 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" event={"ID":"8e8c35b0-3703-4eff-8610-6933e4b7b391","Type":"ContainerDied","Data":"47b9e9f66dd0c2e49eaabd2b5d73dd51ce08a7e9f337425bbcd16e6d262e2ab7"} Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.233929 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tzmcd" event={"ID":"8e8c35b0-3703-4eff-8610-6933e4b7b391","Type":"ContainerDied","Data":"b0840e1b6289307fb42d519cce61ff26d4e26efe36c6ea30bb0c27a2c3b68a28"} Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.233951 4620 scope.go:117] "RemoveContainer" containerID="47b9e9f66dd0c2e49eaabd2b5d73dd51ce08a7e9f337425bbcd16e6d262e2ab7" Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.251415 4620 scope.go:117] "RemoveContainer" containerID="47b9e9f66dd0c2e49eaabd2b5d73dd51ce08a7e9f337425bbcd16e6d262e2ab7" Jan 29 15:08:01 crc kubenswrapper[4620]: E0129 15:08:01.252181 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47b9e9f66dd0c2e49eaabd2b5d73dd51ce08a7e9f337425bbcd16e6d262e2ab7\": container with ID starting with 47b9e9f66dd0c2e49eaabd2b5d73dd51ce08a7e9f337425bbcd16e6d262e2ab7 not found: ID does not exist" containerID="47b9e9f66dd0c2e49eaabd2b5d73dd51ce08a7e9f337425bbcd16e6d262e2ab7" Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.252213 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47b9e9f66dd0c2e49eaabd2b5d73dd51ce08a7e9f337425bbcd16e6d262e2ab7"} err="failed to get container status \"47b9e9f66dd0c2e49eaabd2b5d73dd51ce08a7e9f337425bbcd16e6d262e2ab7\": rpc error: code = NotFound desc = could not find container \"47b9e9f66dd0c2e49eaabd2b5d73dd51ce08a7e9f337425bbcd16e6d262e2ab7\": container with ID starting with 47b9e9f66dd0c2e49eaabd2b5d73dd51ce08a7e9f337425bbcd16e6d262e2ab7 not found: ID does not exist" Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.262737 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tzmcd"] Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.266604 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tzmcd"] Jan 29 15:08:01 crc kubenswrapper[4620]: I0129 15:08:01.273154 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rdbgv" Jan 29 15:08:02 crc kubenswrapper[4620]: I0129 15:08:02.882202 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e8c35b0-3703-4eff-8610-6933e4b7b391" path="/var/lib/kubelet/pods/8e8c35b0-3703-4eff-8610-6933e4b7b391/volumes" Jan 29 15:08:04 crc kubenswrapper[4620]: I0129 15:08:04.111366 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:08:04 crc kubenswrapper[4620]: I0129 15:08:04.111471 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" 
podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:08:04 crc kubenswrapper[4620]: I0129 15:08:04.111537 4620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:08:04 crc kubenswrapper[4620]: I0129 15:08:04.112555 4620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"148ab2afd86389a773d0f644041231ed6425f5f254b9294c6c0a376e8daec7d9"} pod="openshift-machine-config-operator/machine-config-daemon-7469t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:08:04 crc kubenswrapper[4620]: I0129 15:08:04.112713 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" containerID="cri-o://148ab2afd86389a773d0f644041231ed6425f5f254b9294c6c0a376e8daec7d9" gracePeriod=600 Jan 29 15:08:05 crc kubenswrapper[4620]: I0129 15:08:05.255686 4620 generic.go:334] "Generic (PLEG): container finished" podID="a76cce43-3d01-4158-b23a-e21fd5927792" containerID="148ab2afd86389a773d0f644041231ed6425f5f254b9294c6c0a376e8daec7d9" exitCode=0 Jan 29 15:08:05 crc kubenswrapper[4620]: I0129 15:08:05.256416 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerDied","Data":"148ab2afd86389a773d0f644041231ed6425f5f254b9294c6c0a376e8daec7d9"} Jan 29 15:08:05 crc kubenswrapper[4620]: I0129 15:08:05.256458 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerStarted","Data":"2393df41f01aa6bed609d7949bcef8efd536dd6c74fffd44de6b7f3407f9f534"} Jan 29 15:08:05 crc kubenswrapper[4620]: I0129 15:08:05.256495 4620 scope.go:117] "RemoveContainer" containerID="2b439ba4389a7208e5e85554fd7f1f75d8456577712f9f05f714083219a6c529" Jan 29 15:10:04 crc kubenswrapper[4620]: I0129 15:10:04.111036 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:10:04 crc kubenswrapper[4620]: I0129 15:10:04.112828 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:10:34 crc kubenswrapper[4620]: I0129 15:10:34.111583 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:10:34 crc kubenswrapper[4620]: I0129 15:10:34.112543 4620 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:11:04 crc kubenswrapper[4620]: I0129 15:11:04.111255 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:11:04 crc kubenswrapper[4620]: I0129 15:11:04.111909 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:11:04 crc kubenswrapper[4620]: I0129 15:11:04.111968 4620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:11:04 crc kubenswrapper[4620]: I0129 15:11:04.112500 4620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2393df41f01aa6bed609d7949bcef8efd536dd6c74fffd44de6b7f3407f9f534"} pod="openshift-machine-config-operator/machine-config-daemon-7469t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:11:04 crc kubenswrapper[4620]: I0129 15:11:04.112557 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" containerID="cri-o://2393df41f01aa6bed609d7949bcef8efd536dd6c74fffd44de6b7f3407f9f534" gracePeriod=600 Jan 29 15:11:05 crc kubenswrapper[4620]: I0129 15:11:05.254552 4620 generic.go:334] "Generic (PLEG): container finished" podID="a76cce43-3d01-4158-b23a-e21fd5927792" containerID="2393df41f01aa6bed609d7949bcef8efd536dd6c74fffd44de6b7f3407f9f534" exitCode=0 Jan 29 15:11:05 crc kubenswrapper[4620]: I0129 15:11:05.254630 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerDied","Data":"2393df41f01aa6bed609d7949bcef8efd536dd6c74fffd44de6b7f3407f9f534"} Jan 29 15:11:05 crc kubenswrapper[4620]: I0129 15:11:05.255156 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerStarted","Data":"66781c2a016809706f71009a78161c76f619ea542b24dd9a5d78b07cc0a0ddc6"} Jan 29 15:11:05 crc kubenswrapper[4620]: I0129 15:11:05.255180 4620 scope.go:117] "RemoveContainer" containerID="148ab2afd86389a773d0f644041231ed6425f5f254b9294c6c0a376e8daec7d9" Jan 29 15:11:13 crc kubenswrapper[4620]: I0129 15:11:13.490585 4620 scope.go:117] "RemoveContainer" containerID="de9704558922d4c26c7e7fe59f5c67002b16fd8cd5a45af29778dc9870187866" Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.401443 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-c7xrn"] Jan 29 15:12:00 
Jan 29 15:12:00 crc kubenswrapper[4620]: E0129 15:12:00.402276 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e8c35b0-3703-4eff-8610-6933e4b7b391" containerName="registry"
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.402294 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e8c35b0-3703-4eff-8610-6933e4b7b391" containerName="registry"
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.402415 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e8c35b0-3703-4eff-8610-6933e4b7b391" containerName="registry"
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.402911 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-c7xrn"
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.406781 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.407221 4620 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-k7q7m"
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.410319 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.416021 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-c7xrn"]
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.424657 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-cvcw9"]
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.425499 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-cvcw9"
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.429288 4620 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-ffq8z"
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.447292 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-cvcw9"]
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.459805 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-cnndp"]
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.460590 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-cnndp"
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.465619 4620 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-h7mw2"
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.483979 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-cnndp"]
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.543252 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch4nj\" (UniqueName: \"kubernetes.io/projected/df87fa1f-48eb-453e-8f45-d9cb15f59c04-kube-api-access-ch4nj\") pod \"cert-manager-858654f9db-cvcw9\" (UID: \"df87fa1f-48eb-453e-8f45-d9cb15f59c04\") " pod="cert-manager/cert-manager-858654f9db-cvcw9"
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.543315 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rlxr\" (UniqueName: \"kubernetes.io/projected/14260386-a556-4e0b-9915-ddca9755fd9c-kube-api-access-7rlxr\") pod \"cert-manager-cainjector-cf98fcc89-c7xrn\" (UID: \"14260386-a556-4e0b-9915-ddca9755fd9c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-c7xrn"
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.644828 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch4nj\" (UniqueName: \"kubernetes.io/projected/df87fa1f-48eb-453e-8f45-d9cb15f59c04-kube-api-access-ch4nj\") pod \"cert-manager-858654f9db-cvcw9\" (UID: \"df87fa1f-48eb-453e-8f45-d9cb15f59c04\") " pod="cert-manager/cert-manager-858654f9db-cvcw9"
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.644881 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rlxr\" (UniqueName: \"kubernetes.io/projected/14260386-a556-4e0b-9915-ddca9755fd9c-kube-api-access-7rlxr\") pod \"cert-manager-cainjector-cf98fcc89-c7xrn\" (UID: \"14260386-a556-4e0b-9915-ddca9755fd9c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-c7xrn"
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.644957 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z49wr\" (UniqueName: \"kubernetes.io/projected/28179b6f-f84a-4a18-97a0-5a2f6f06423e-kube-api-access-z49wr\") pod \"cert-manager-webhook-687f57d79b-cnndp\" (UID: \"28179b6f-f84a-4a18-97a0-5a2f6f06423e\") " pod="cert-manager/cert-manager-webhook-687f57d79b-cnndp"
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.664285 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rlxr\" (UniqueName: \"kubernetes.io/projected/14260386-a556-4e0b-9915-ddca9755fd9c-kube-api-access-7rlxr\") pod \"cert-manager-cainjector-cf98fcc89-c7xrn\" (UID: \"14260386-a556-4e0b-9915-ddca9755fd9c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-c7xrn"
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.666166 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch4nj\" (UniqueName: \"kubernetes.io/projected/df87fa1f-48eb-453e-8f45-d9cb15f59c04-kube-api-access-ch4nj\") pod \"cert-manager-858654f9db-cvcw9\" (UID: \"df87fa1f-48eb-453e-8f45-d9cb15f59c04\") " pod="cert-manager/cert-manager-858654f9db-cvcw9"
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.727485 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-c7xrn"
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.739501 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-cvcw9"
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.749781 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z49wr\" (UniqueName: \"kubernetes.io/projected/28179b6f-f84a-4a18-97a0-5a2f6f06423e-kube-api-access-z49wr\") pod \"cert-manager-webhook-687f57d79b-cnndp\" (UID: \"28179b6f-f84a-4a18-97a0-5a2f6f06423e\") " pod="cert-manager/cert-manager-webhook-687f57d79b-cnndp"
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.770633 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z49wr\" (UniqueName: \"kubernetes.io/projected/28179b6f-f84a-4a18-97a0-5a2f6f06423e-kube-api-access-z49wr\") pod \"cert-manager-webhook-687f57d79b-cnndp\" (UID: \"28179b6f-f84a-4a18-97a0-5a2f6f06423e\") " pod="cert-manager/cert-manager-webhook-687f57d79b-cnndp"
Jan 29 15:12:00 crc kubenswrapper[4620]: I0129 15:12:00.773596 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-cnndp"
Jan 29 15:12:01 crc kubenswrapper[4620]: I0129 15:12:01.102562 4620 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 15:12:01 crc kubenswrapper[4620]: I0129 15:12:01.128308 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-c7xrn"]
Jan 29 15:12:01 crc kubenswrapper[4620]: I0129 15:12:01.136421 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-cvcw9"]
Jan 29 15:12:01 crc kubenswrapper[4620]: I0129 15:12:01.157400 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-cnndp"]
Jan 29 15:12:01 crc kubenswrapper[4620]: W0129 15:12:01.159576 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28179b6f_f84a_4a18_97a0_5a2f6f06423e.slice/crio-4bc51db03428d38bcf527c9dda53a30937bf5e40e5f51787d06bbfb12d4cc58c WatchSource:0}: Error finding container 4bc51db03428d38bcf527c9dda53a30937bf5e40e5f51787d06bbfb12d4cc58c: Status 404 returned error can't find the container with id 4bc51db03428d38bcf527c9dda53a30937bf5e40e5f51787d06bbfb12d4cc58c
Jan 29 15:12:01 crc kubenswrapper[4620]: I0129 15:12:01.576589 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-cvcw9" event={"ID":"df87fa1f-48eb-453e-8f45-d9cb15f59c04","Type":"ContainerStarted","Data":"f9a636366001bccdf01dcd2931ed22b296ec8767aa14e95fa070b5f9e9603cdd"}
Jan 29 15:12:01 crc kubenswrapper[4620]: I0129 15:12:01.577405 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-cnndp" event={"ID":"28179b6f-f84a-4a18-97a0-5a2f6f06423e","Type":"ContainerStarted","Data":"4bc51db03428d38bcf527c9dda53a30937bf5e40e5f51787d06bbfb12d4cc58c"}
Jan 29 15:12:01 crc kubenswrapper[4620]: I0129 15:12:01.578511 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-c7xrn" event={"ID":"14260386-a556-4e0b-9915-ddca9755fd9c","Type":"ContainerStarted","Data":"9b1a929e6c05ffbacc967b9428d299659dc9bc5bf126b732ffb2a595e572ac1e"}
Jan 29 15:12:09 crc kubenswrapper[4620]: I0129 15:12:09.977790 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ks4d9"]
Jan 29 15:12:09 crc kubenswrapper[4620]: I0129 15:12:09.979373 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovn-controller" containerID="cri-o://0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96" gracePeriod=30
Jan 29 15:12:09 crc kubenswrapper[4620]: I0129 15:12:09.979460 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c" gracePeriod=30
Jan 29 15:12:09 crc kubenswrapper[4620]: I0129 15:12:09.979469 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="northd" containerID="cri-o://73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b" gracePeriod=30
Jan 29 15:12:09 crc kubenswrapper[4620]: I0129 15:12:09.979587 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="sbdb" containerID="cri-o://cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6" gracePeriod=30
Jan 29 15:12:09 crc kubenswrapper[4620]: I0129 15:12:09.979621 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="nbdb" containerID="cri-o://9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4" gracePeriod=30
Jan 29 15:12:09 crc kubenswrapper[4620]: I0129 15:12:09.979597 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovn-acl-logging" containerID="cri-o://bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98" gracePeriod=30
Jan 29 15:12:09 crc kubenswrapper[4620]: I0129 15:12:09.979595 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="kube-rbac-proxy-node" containerID="cri-o://4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac" gracePeriod=30
Jan 29 15:12:10 crc kubenswrapper[4620]: I0129 15:12:10.038001 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovnkube-controller" containerID="cri-o://873f8b03c9f47305fc2108ee369b2291ef0cc1e17757d1d8933c564550177d55" gracePeriod=30
Jan 29 15:12:10 crc kubenswrapper[4620]: I0129 15:12:10.622650 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovnkube-controller/3.log"
Jan 29 15:12:10 crc kubenswrapper[4620]: I0129 15:12:10.625656 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovn-acl-logging/0.log"
Jan 29 15:12:10 crc kubenswrapper[4620]: I0129 15:12:10.626296 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovn-controller/0.log"
Jan 29 15:12:10 crc kubenswrapper[4620]: I0129 15:12:10.626819 4620 generic.go:334] "Generic (PLEG): container finished" podID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerID="cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6" exitCode=0
Jan 29 15:12:10 crc kubenswrapper[4620]: I0129 15:12:10.626864 4620 generic.go:334] "Generic (PLEG): container finished" podID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerID="72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c" exitCode=0
Jan 29 15:12:10 crc kubenswrapper[4620]: I0129 15:12:10.626874 4620 generic.go:334] "Generic (PLEG): container finished" podID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerID="4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac" exitCode=0
Jan 29 15:12:10 crc kubenswrapper[4620]: I0129 15:12:10.626887 4620 generic.go:334] "Generic (PLEG): container finished" podID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerID="bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98" exitCode=143
Jan 29 15:12:10 crc kubenswrapper[4620]: I0129 15:12:10.626896 4620 generic.go:334] "Generic (PLEG): container finished" podID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerID="0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96" exitCode=143
Jan 29 15:12:10 crc kubenswrapper[4620]: I0129 15:12:10.626989 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerDied","Data":"cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6"}
Jan 29 15:12:10 crc kubenswrapper[4620]: I0129 15:12:10.627046 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerDied","Data":"72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c"}
Jan 29 15:12:10 crc kubenswrapper[4620]: I0129 15:12:10.627062 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerDied","Data":"4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac"}
Jan 29 15:12:10 crc kubenswrapper[4620]: I0129 15:12:10.627075 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerDied","Data":"bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98"}
Jan 29 15:12:10 crc kubenswrapper[4620]: I0129 15:12:10.627090 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerDied","Data":"0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96"}
Jan 29 15:12:11 crc kubenswrapper[4620]: I0129 15:12:11.634995 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovnkube-controller/3.log"
Jan 29 15:12:11 crc kubenswrapper[4620]: I0129 15:12:11.637469 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovn-acl-logging/0.log"
Jan 29 15:12:11 crc kubenswrapper[4620]: I0129 15:12:11.638556 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovn-controller/0.log"
Jan 29 15:12:11 crc kubenswrapper[4620]: I0129 15:12:11.640367 4620 generic.go:334] "Generic (PLEG): container finished" podID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerID="873f8b03c9f47305fc2108ee369b2291ef0cc1e17757d1d8933c564550177d55" exitCode=0
Jan 29 15:12:11 crc kubenswrapper[4620]: I0129 15:12:11.640397 4620 generic.go:334] "Generic (PLEG): container finished" podID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerID="9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4" exitCode=0
Jan 29 15:12:11 crc kubenswrapper[4620]: I0129 15:12:11.640406 4620 generic.go:334] "Generic (PLEG): container finished" podID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerID="73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b" exitCode=0
Jan 29 15:12:11 crc kubenswrapper[4620]: I0129 15:12:11.640422 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerDied","Data":"873f8b03c9f47305fc2108ee369b2291ef0cc1e17757d1d8933c564550177d55"}
Jan 29 15:12:11 crc kubenswrapper[4620]: I0129 15:12:11.640482 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerDied","Data":"9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4"}
Jan 29 15:12:11 crc kubenswrapper[4620]: I0129 15:12:11.640493 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerDied","Data":"73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b"}
Jan 29 15:12:11 crc kubenswrapper[4620]: I0129 15:12:11.640515 4620 scope.go:117] "RemoveContainer" containerID="375b349ffca0abfa8fe8f373c5b9c6ec6c82b0f7668f7a366f51df4e8c808fe9"
Jan 29 15:12:11 crc kubenswrapper[4620]: I0129 15:12:11.646635 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tlwgt_f66b658d-e5ec-445e-9494-0a0062e87c4c/kube-multus/2.log"
Jan 29 15:12:11 crc kubenswrapper[4620]: I0129 15:12:11.647232 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tlwgt_f66b658d-e5ec-445e-9494-0a0062e87c4c/kube-multus/1.log"
Jan 29 15:12:11 crc kubenswrapper[4620]: I0129 15:12:11.647269 4620 generic.go:334] "Generic (PLEG): container finished" podID="f66b658d-e5ec-445e-9494-0a0062e87c4c" containerID="8e9ea767b98760df113964c83b8aa3b0a3171651da218d1ca361f2f91ef91add" exitCode=2
Jan 29 15:12:11 crc kubenswrapper[4620]: I0129 15:12:11.647297 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tlwgt" event={"ID":"f66b658d-e5ec-445e-9494-0a0062e87c4c","Type":"ContainerDied","Data":"8e9ea767b98760df113964c83b8aa3b0a3171651da218d1ca361f2f91ef91add"}
Jan 29 15:12:11 crc kubenswrapper[4620]: I0129 15:12:11.647700 4620 scope.go:117] "RemoveContainer" containerID="8e9ea767b98760df113964c83b8aa3b0a3171651da218d1ca361f2f91ef91add"
Jan 29 15:12:11 crc kubenswrapper[4620]: E0129 15:12:11.648007 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-tlwgt_openshift-multus(f66b658d-e5ec-445e-9494-0a0062e87c4c)\"" pod="openshift-multus/multus-tlwgt" podUID="f66b658d-e5ec-445e-9494-0a0062e87c4c"
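The "back-off 20s restarting failed container" entry above is CrashLoopBackOff in action: the kubelet delays each restart of a crashing container and doubles the delay per crash. A sketch of that schedule, assuming the usual defaults of a 10s base and a 5m cap (neither is stated in this log, and the delay resets after a sufficiently long healthy run):

// crashloop.go - the restart-delay schedule implied by the CrashLoopBackOff
// message above. Base (10s) and cap (5m) are assumed defaults.
package main

import (
	"fmt"
	"time"
)

func backoff(restarts int) time.Duration {
	d := 10 * time.Second // assumed base delay
	for i := 0; i < restarts; i++ {
		d *= 2
		if d > 5*time.Minute {
			return 5 * time.Minute // assumed cap
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> wait %v\n", r, backoff(r))
	}
	// restart 1 -> wait 20s, matching the "back-off 20s" in the log above.
}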
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.133669 4620 scope.go:117] "RemoveContainer" containerID="ab556c57460cbb92047062018f7ce44d2e29138667978e01c0f201c9a118486d"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.169710 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovn-acl-logging/0.log"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.170693 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovn-controller/0.log"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.171232 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.238503 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-br9jw"]
Jan 29 15:12:12 crc kubenswrapper[4620]: E0129 15:12:12.239220 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="nbdb"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239244 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="nbdb"
Jan 29 15:12:12 crc kubenswrapper[4620]: E0129 15:12:12.239256 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovnkube-controller"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239264 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovnkube-controller"
Jan 29 15:12:12 crc kubenswrapper[4620]: E0129 15:12:12.239276 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovn-controller"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239283 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovn-controller"
Jan 29 15:12:12 crc kubenswrapper[4620]: E0129 15:12:12.239292 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovnkube-controller"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239298 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovnkube-controller"
Jan 29 15:12:12 crc kubenswrapper[4620]: E0129 15:12:12.239306 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="kubecfg-setup"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239312 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="kubecfg-setup"
Jan 29 15:12:12 crc kubenswrapper[4620]: E0129 15:12:12.239322 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="kube-rbac-proxy-node"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239328 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="kube-rbac-proxy-node"
Jan 29 15:12:12 crc kubenswrapper[4620]: E0129 15:12:12.239336 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovnkube-controller"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239341 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovnkube-controller"
Jan 29 15:12:12 crc kubenswrapper[4620]: E0129 15:12:12.239350 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovn-acl-logging"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239356 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovn-acl-logging"
Jan 29 15:12:12 crc kubenswrapper[4620]: E0129 15:12:12.239369 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="sbdb"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239376 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="sbdb"
Jan 29 15:12:12 crc kubenswrapper[4620]: E0129 15:12:12.239385 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="northd"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239395 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="northd"
Jan 29 15:12:12 crc kubenswrapper[4620]: E0129 15:12:12.239405 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="kube-rbac-proxy-ovn-metrics"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239412 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="kube-rbac-proxy-ovn-metrics"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239533 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovnkube-controller"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239543 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovnkube-controller"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239556 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="sbdb"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239565 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovn-acl-logging"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239572 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovnkube-controller"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239578 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovnkube-controller"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239588 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="nbdb"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239595 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="kube-rbac-proxy-ovn-metrics"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239604 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="northd"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239614 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovn-controller"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239621 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="kube-rbac-proxy-node"
Jan 29 15:12:12 crc kubenswrapper[4620]: E0129 15:12:12.239781 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovnkube-controller"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239794 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovnkube-controller"
Jan 29 15:12:12 crc kubenswrapper[4620]: E0129 15:12:12.239815 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovnkube-controller"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239824 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovnkube-controller"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.239954 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" containerName="ovnkube-controller"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.243455 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-br9jw"
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.265826 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-cni-bin\") pod \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") "
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.266438 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-run-netns\") pod \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") "
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.266487 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-slash\") pod \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") "
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.266520 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fa9cbed4-05b4-48af-81c2-9f8903dc765e-ovnkube-script-lib\") pod \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") "
Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.266545 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-node-log\") pod \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") "
(UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.266614 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.266650 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-run-ovn-kubernetes\") pod \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.266670 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-run-openvswitch\") pod \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.266697 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-var-lib-openvswitch\") pod \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.266718 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-run-ovn\") pod \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.266785 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-etc-openvswitch\") pod \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.266808 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-cni-netd\") pod \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.266834 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-kubelet\") pod \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.266869 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-run-systemd\") pod \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.266898 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fa9cbed4-05b4-48af-81c2-9f8903dc765e-env-overrides\") pod 
\"fa9cbed4-05b4-48af-81c2-9f8903dc765e\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.266936 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbvrt\" (UniqueName: \"kubernetes.io/projected/fa9cbed4-05b4-48af-81c2-9f8903dc765e-kube-api-access-bbvrt\") pod \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.266971 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fa9cbed4-05b4-48af-81c2-9f8903dc765e-ovnkube-config\") pod \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267048 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-systemd-units\") pod \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267076 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-log-socket\") pod \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267104 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fa9cbed4-05b4-48af-81c2-9f8903dc765e-ovn-node-metrics-cert\") pod \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\" (UID: \"fa9cbed4-05b4-48af-81c2-9f8903dc765e\") " Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267317 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-run-systemd\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267376 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-host-run-ovn-kubernetes\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267407 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-host-slash\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267424 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-log-socket\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267446 4620 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-var-lib-openvswitch\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267505 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267527 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-host-cni-netd\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267559 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-node-log\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267584 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-systemd-units\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267614 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-host-kubelet\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267653 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhpn5\" (UniqueName: \"kubernetes.io/projected/5053966b-151c-44ff-9f11-2750a48b8147-kube-api-access-mhpn5\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267674 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-run-openvswitch\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267697 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5053966b-151c-44ff-9f11-2750a48b8147-env-overrides\") pod \"ovnkube-node-br9jw\" (UID: 
\"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267718 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5053966b-151c-44ff-9f11-2750a48b8147-ovn-node-metrics-cert\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267752 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-etc-openvswitch\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267851 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-run-ovn\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267889 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-host-cni-bin\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267911 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5053966b-151c-44ff-9f11-2750a48b8147-ovnkube-config\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267946 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-host-run-netns\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.267966 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5053966b-151c-44ff-9f11-2750a48b8147-ovnkube-script-lib\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.266273 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "fa9cbed4-05b4-48af-81c2-9f8903dc765e" (UID: "fa9cbed4-05b4-48af-81c2-9f8903dc765e"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.268258 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "fa9cbed4-05b4-48af-81c2-9f8903dc765e" (UID: "fa9cbed4-05b4-48af-81c2-9f8903dc765e"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.268315 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-slash" (OuterVolumeSpecName: "host-slash") pod "fa9cbed4-05b4-48af-81c2-9f8903dc765e" (UID: "fa9cbed4-05b4-48af-81c2-9f8903dc765e"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.268911 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa9cbed4-05b4-48af-81c2-9f8903dc765e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "fa9cbed4-05b4-48af-81c2-9f8903dc765e" (UID: "fa9cbed4-05b4-48af-81c2-9f8903dc765e"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.268956 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-node-log" (OuterVolumeSpecName: "node-log") pod "fa9cbed4-05b4-48af-81c2-9f8903dc765e" (UID: "fa9cbed4-05b4-48af-81c2-9f8903dc765e"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.268984 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "fa9cbed4-05b4-48af-81c2-9f8903dc765e" (UID: "fa9cbed4-05b4-48af-81c2-9f8903dc765e"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.269010 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "fa9cbed4-05b4-48af-81c2-9f8903dc765e" (UID: "fa9cbed4-05b4-48af-81c2-9f8903dc765e"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.269111 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "fa9cbed4-05b4-48af-81c2-9f8903dc765e" (UID: "fa9cbed4-05b4-48af-81c2-9f8903dc765e"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.269149 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "fa9cbed4-05b4-48af-81c2-9f8903dc765e" (UID: "fa9cbed4-05b4-48af-81c2-9f8903dc765e"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.269185 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "fa9cbed4-05b4-48af-81c2-9f8903dc765e" (UID: "fa9cbed4-05b4-48af-81c2-9f8903dc765e"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.269211 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "fa9cbed4-05b4-48af-81c2-9f8903dc765e" (UID: "fa9cbed4-05b4-48af-81c2-9f8903dc765e"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.269238 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "fa9cbed4-05b4-48af-81c2-9f8903dc765e" (UID: "fa9cbed4-05b4-48af-81c2-9f8903dc765e"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.269263 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "fa9cbed4-05b4-48af-81c2-9f8903dc765e" (UID: "fa9cbed4-05b4-48af-81c2-9f8903dc765e"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.269898 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa9cbed4-05b4-48af-81c2-9f8903dc765e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "fa9cbed4-05b4-48af-81c2-9f8903dc765e" (UID: "fa9cbed4-05b4-48af-81c2-9f8903dc765e"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.270309 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa9cbed4-05b4-48af-81c2-9f8903dc765e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "fa9cbed4-05b4-48af-81c2-9f8903dc765e" (UID: "fa9cbed4-05b4-48af-81c2-9f8903dc765e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.270341 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "fa9cbed4-05b4-48af-81c2-9f8903dc765e" (UID: "fa9cbed4-05b4-48af-81c2-9f8903dc765e"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.270364 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-log-socket" (OuterVolumeSpecName: "log-socket") pod "fa9cbed4-05b4-48af-81c2-9f8903dc765e" (UID: "fa9cbed4-05b4-48af-81c2-9f8903dc765e"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.293307 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa9cbed4-05b4-48af-81c2-9f8903dc765e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "fa9cbed4-05b4-48af-81c2-9f8903dc765e" (UID: "fa9cbed4-05b4-48af-81c2-9f8903dc765e"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.293857 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "fa9cbed4-05b4-48af-81c2-9f8903dc765e" (UID: "fa9cbed4-05b4-48af-81c2-9f8903dc765e"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.297841 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa9cbed4-05b4-48af-81c2-9f8903dc765e-kube-api-access-bbvrt" (OuterVolumeSpecName: "kube-api-access-bbvrt") pod "fa9cbed4-05b4-48af-81c2-9f8903dc765e" (UID: "fa9cbed4-05b4-48af-81c2-9f8903dc765e"). InnerVolumeSpecName "kube-api-access-bbvrt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.369123 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-host-cni-bin\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.369039 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-host-cni-bin\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.369613 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5053966b-151c-44ff-9f11-2750a48b8147-ovnkube-config\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.370573 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-host-run-netns\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.370735 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5053966b-151c-44ff-9f11-2750a48b8147-ovnkube-script-lib\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.371365 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-run-systemd\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.371614 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-host-run-ovn-kubernetes\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.371307 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5053966b-151c-44ff-9f11-2750a48b8147-ovnkube-script-lib\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.370697 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-host-run-netns\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 
15:12:12.371560 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-run-systemd\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.370523 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5053966b-151c-44ff-9f11-2750a48b8147-ovnkube-config\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.371715 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-host-run-ovn-kubernetes\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.372126 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-host-slash\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.372252 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-log-socket\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.372379 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-var-lib-openvswitch\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.372520 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.372658 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-host-cni-netd\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.372814 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-node-log\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.373073 4620 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-systemd-units\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.373238 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-host-kubelet\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.373370 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhpn5\" (UniqueName: \"kubernetes.io/projected/5053966b-151c-44ff-9f11-2750a48b8147-kube-api-access-mhpn5\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.373882 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-run-openvswitch\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.374033 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5053966b-151c-44ff-9f11-2750a48b8147-env-overrides\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.374500 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5053966b-151c-44ff-9f11-2750a48b8147-ovn-node-metrics-cert\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.375199 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-etc-openvswitch\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.375320 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-run-ovn\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.375385 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-etc-openvswitch\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.372622 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.373334 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-host-kubelet\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.372749 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-host-cni-netd\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.373993 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-run-openvswitch\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.372474 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-var-lib-openvswitch\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.374455 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5053966b-151c-44ff-9f11-2750a48b8147-env-overrides\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.373030 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-node-log\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.372345 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-log-socket\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.373186 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-systemd-units\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.375400 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-run-ovn\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.372221 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5053966b-151c-44ff-9f11-2750a48b8147-host-slash\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.375861 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbvrt\" (UniqueName: \"kubernetes.io/projected/fa9cbed4-05b4-48af-81c2-9f8903dc765e-kube-api-access-bbvrt\") on node \"crc\" DevicePath \"\"" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.375922 4620 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fa9cbed4-05b4-48af-81c2-9f8903dc765e-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.375976 4620 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.376034 4620 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-log-socket\") on node \"crc\" DevicePath \"\"" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.376144 4620 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fa9cbed4-05b4-48af-81c2-9f8903dc765e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.376205 4620 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.376283 4620 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.376339 4620 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fa9cbed4-05b4-48af-81c2-9f8903dc765e-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.376397 4620 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-slash\") on node \"crc\" DevicePath \"\"" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.376450 4620 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-node-log\") on node \"crc\" DevicePath \"\"" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.376501 4620 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.376558 4620 reconciler_common.go:293] "Volume detached for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.376615 4620 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.376682 4620 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.376744 4620 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.376817 4620 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.376879 4620 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.376938 4620 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.377018 4620 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fa9cbed4-05b4-48af-81c2-9f8903dc765e-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.377086 4620 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fa9cbed4-05b4-48af-81c2-9f8903dc765e-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.379173 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5053966b-151c-44ff-9f11-2750a48b8147-ovn-node-metrics-cert\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.395127 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhpn5\" (UniqueName: \"kubernetes.io/projected/5053966b-151c-44ff-9f11-2750a48b8147-kube-api-access-mhpn5\") pod \"ovnkube-node-br9jw\" (UID: \"5053966b-151c-44ff-9f11-2750a48b8147\") " pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.560243 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.658153 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovn-acl-logging/0.log" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.660205 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ks4d9_fa9cbed4-05b4-48af-81c2-9f8903dc765e/ovn-controller/0.log" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.660678 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" event={"ID":"fa9cbed4-05b4-48af-81c2-9f8903dc765e","Type":"ContainerDied","Data":"1fe11e6e71e3bc2a90b15dfdad5e1734964f582922f5c79e6600685b06c40ec1"} Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.660765 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ks4d9" Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.689805 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ks4d9"] Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.699202 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ks4d9"] Jan 29 15:12:12 crc kubenswrapper[4620]: I0129 15:12:12.881751 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa9cbed4-05b4-48af-81c2-9f8903dc765e" path="/var/lib/kubelet/pods/fa9cbed4-05b4-48af-81c2-9f8903dc765e/volumes" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.527931 4620 scope.go:117] "RemoveContainer" containerID="4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.805789 4620 scope.go:117] "RemoveContainer" containerID="0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.815047 4620 scope.go:117] "RemoveContainer" containerID="873f8b03c9f47305fc2108ee369b2291ef0cc1e17757d1d8933c564550177d55" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.823367 4620 scope.go:117] "RemoveContainer" containerID="cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6" Jan 29 15:12:13 crc kubenswrapper[4620]: W0129 15:12:13.830150 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5053966b_151c_44ff_9f11_2750a48b8147.slice/crio-fa65ae4eda56c44b5616292dd2e43627c717d4f1153f99497681551f9cb63a76 WatchSource:0}: Error finding container fa65ae4eda56c44b5616292dd2e43627c717d4f1153f99497681551f9cb63a76: Status 404 returned error can't find the container with id fa65ae4eda56c44b5616292dd2e43627c717d4f1153f99497681551f9cb63a76 Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.861609 4620 scope.go:117] "RemoveContainer" containerID="cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.870746 4620 scope.go:117] "RemoveContainer" containerID="873f8b03c9f47305fc2108ee369b2291ef0cc1e17757d1d8933c564550177d55" Jan 29 15:12:13 crc kubenswrapper[4620]: E0129 15:12:13.870922 4620 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_sbdb_ovnkube-node-ks4d9_openshift-ovn-kubernetes_fa9cbed4-05b4-48af-81c2-9f8903dc765e_0 in pod sandbox 
1fe11e6e71e3bc2a90b15dfdad5e1734964f582922f5c79e6600685b06c40ec1 from index: no such id: 'cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6'" containerID="cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.870958 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6"} err="rpc error: code = Unknown desc = failed to delete container k8s_sbdb_ovnkube-node-ks4d9_openshift-ovn-kubernetes_fa9cbed4-05b4-48af-81c2-9f8903dc765e_0 in pod sandbox 1fe11e6e71e3bc2a90b15dfdad5e1734964f582922f5c79e6600685b06c40ec1 from index: no such id: 'cc1189188b1d1f012d18415b6b5def90ba2e1581beab60a0305a9770d50592c6'" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.870988 4620 scope.go:117] "RemoveContainer" containerID="9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4" Jan 29 15:12:13 crc kubenswrapper[4620]: E0129 15:12:13.871316 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"873f8b03c9f47305fc2108ee369b2291ef0cc1e17757d1d8933c564550177d55\": container with ID starting with 873f8b03c9f47305fc2108ee369b2291ef0cc1e17757d1d8933c564550177d55 not found: ID does not exist" containerID="873f8b03c9f47305fc2108ee369b2291ef0cc1e17757d1d8933c564550177d55" Jan 29 15:12:13 crc kubenswrapper[4620]: E0129 15:12:13.871393 4620 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"873f8b03c9f47305fc2108ee369b2291ef0cc1e17757d1d8933c564550177d55\": rpc error: code = NotFound desc = could not find container \"873f8b03c9f47305fc2108ee369b2291ef0cc1e17757d1d8933c564550177d55\": container with ID starting with 873f8b03c9f47305fc2108ee369b2291ef0cc1e17757d1d8933c564550177d55 not found: ID does not exist" containerID="873f8b03c9f47305fc2108ee369b2291ef0cc1e17757d1d8933c564550177d55" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.871446 4620 scope.go:117] "RemoveContainer" containerID="0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.889099 4620 scope.go:117] "RemoveContainer" containerID="bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.894866 4620 scope.go:117] "RemoveContainer" containerID="73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.904626 4620 scope.go:117] "RemoveContainer" containerID="9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4" Jan 29 15:12:13 crc kubenswrapper[4620]: E0129 15:12:13.905059 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\": container with ID starting with 9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4 not found: ID does not exist" containerID="9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4" Jan 29 15:12:13 crc kubenswrapper[4620]: E0129 15:12:13.905142 4620 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\": rpc error: code = NotFound desc = could not find container \"9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4\": container with ID 
starting with 9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4 not found: ID does not exist" containerID="9aa3528734ddb41659bc37db6f4e5fbb053623f78eb6296bc64359355764aed4" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.905168 4620 scope.go:117] "RemoveContainer" containerID="73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.912446 4620 scope.go:117] "RemoveContainer" containerID="72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c" Jan 29 15:12:13 crc kubenswrapper[4620]: E0129 15:12:13.912615 4620 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_northd_ovnkube-node-ks4d9_openshift-ovn-kubernetes_fa9cbed4-05b4-48af-81c2-9f8903dc765e_0 in pod sandbox 1fe11e6e71e3bc2a90b15dfdad5e1734964f582922f5c79e6600685b06c40ec1 from index: no such id: '73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b'" containerID="73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b" Jan 29 15:12:13 crc kubenswrapper[4620]: E0129 15:12:13.912648 4620 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_northd_ovnkube-node-ks4d9_openshift-ovn-kubernetes_fa9cbed4-05b4-48af-81c2-9f8903dc765e_0 in pod sandbox 1fe11e6e71e3bc2a90b15dfdad5e1734964f582922f5c79e6600685b06c40ec1 from index: no such id: '73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b'" containerID="73792c03e7fbbaf8ba89d5be60a536256bf92e6179ca4a090572e59f4605b86b" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.912665 4620 scope.go:117] "RemoveContainer" containerID="72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.934219 4620 scope.go:117] "RemoveContainer" containerID="4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac" Jan 29 15:12:13 crc kubenswrapper[4620]: E0129 15:12:13.934363 4620 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-rbac-proxy-ovn-metrics_ovnkube-node-ks4d9_openshift-ovn-kubernetes_fa9cbed4-05b4-48af-81c2-9f8903dc765e_0 in pod sandbox 1fe11e6e71e3bc2a90b15dfdad5e1734964f582922f5c79e6600685b06c40ec1 from index: no such id: '72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c'" containerID="72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c" Jan 29 15:12:13 crc kubenswrapper[4620]: E0129 15:12:13.934389 4620 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-rbac-proxy-ovn-metrics_ovnkube-node-ks4d9_openshift-ovn-kubernetes_fa9cbed4-05b4-48af-81c2-9f8903dc765e_0 in pod sandbox 1fe11e6e71e3bc2a90b15dfdad5e1734964f582922f5c79e6600685b06c40ec1 from index: no such id: '72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c'" containerID="72d2cbc2de5fe2b02c768fbbfd3c5371c8f5c2ccee921329ea0e42977cb4a50c" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.934413 4620 scope.go:117] "RemoveContainer" containerID="ab556c57460cbb92047062018f7ce44d2e29138667978e01c0f201c9a118486d" Jan 29 15:12:13 crc kubenswrapper[4620]: E0129 15:12:13.934553 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\": container with ID starting with 
4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac not found: ID does not exist" containerID="4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.934588 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac"} err="failed to get container status \"4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\": rpc error: code = NotFound desc = could not find container \"4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac\": container with ID starting with 4eb91b2aecc7e3da4f6f7dbfd5ce1b9b4549d3d024be09e634434f17786673ac not found: ID does not exist" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.934614 4620 scope.go:117] "RemoveContainer" containerID="bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98" Jan 29 15:12:13 crc kubenswrapper[4620]: E0129 15:12:13.934906 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\": container with ID starting with bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98 not found: ID does not exist" containerID="bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.934947 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98"} err="failed to get container status \"bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\": rpc error: code = NotFound desc = could not find container \"bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98\": container with ID starting with bac8edfa78a6d35065603dafca48436c3f443d8ad0dcc659e0dad623f46f9e98 not found: ID does not exist" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.934967 4620 scope.go:117] "RemoveContainer" containerID="0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96" Jan 29 15:12:13 crc kubenswrapper[4620]: E0129 15:12:13.935421 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab556c57460cbb92047062018f7ce44d2e29138667978e01c0f201c9a118486d\": container with ID starting with ab556c57460cbb92047062018f7ce44d2e29138667978e01c0f201c9a118486d not found: ID does not exist" containerID="ab556c57460cbb92047062018f7ce44d2e29138667978e01c0f201c9a118486d" Jan 29 15:12:13 crc kubenswrapper[4620]: E0129 15:12:13.935452 4620 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"ab556c57460cbb92047062018f7ce44d2e29138667978e01c0f201c9a118486d\": rpc error: code = NotFound desc = could not find container \"ab556c57460cbb92047062018f7ce44d2e29138667978e01c0f201c9a118486d\": container with ID starting with ab556c57460cbb92047062018f7ce44d2e29138667978e01c0f201c9a118486d not found: ID does not exist" containerID="ab556c57460cbb92047062018f7ce44d2e29138667978e01c0f201c9a118486d" Jan 29 15:12:13 crc kubenswrapper[4620]: E0129 15:12:13.935864 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\": container with ID starting with 
0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96 not found: ID does not exist" containerID="0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.935950 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96"} err="failed to get container status \"0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\": rpc error: code = NotFound desc = could not find container \"0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96\": container with ID starting with 0ff07aa0fc9323b61ecd53142c0878d39b05ff43c2a6df627b98111307c8ca96 not found: ID does not exist" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.935993 4620 scope.go:117] "RemoveContainer" containerID="0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9" Jan 29 15:12:13 crc kubenswrapper[4620]: E0129 15:12:13.936469 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\": container with ID starting with 0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9 not found: ID does not exist" containerID="0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9" Jan 29 15:12:13 crc kubenswrapper[4620]: I0129 15:12:13.936495 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9"} err="failed to get container status \"0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\": rpc error: code = NotFound desc = could not find container \"0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9\": container with ID starting with 0ed6cf59b2cc09d349ccfd170548ed77aae6cfc1ddf77c47b71880c3deb799d9 not found: ID does not exist" Jan 29 15:12:14 crc kubenswrapper[4620]: I0129 15:12:14.675344 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tlwgt_f66b658d-e5ec-445e-9494-0a0062e87c4c/kube-multus/2.log" Jan 29 15:12:14 crc kubenswrapper[4620]: I0129 15:12:14.677615 4620 generic.go:334] "Generic (PLEG): container finished" podID="5053966b-151c-44ff-9f11-2750a48b8147" containerID="740097d3e356000ffecd5639dd499a166137662c49208173e416216d81a68ec0" exitCode=0 Jan 29 15:12:14 crc kubenswrapper[4620]: I0129 15:12:14.677657 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" event={"ID":"5053966b-151c-44ff-9f11-2750a48b8147","Type":"ContainerDied","Data":"740097d3e356000ffecd5639dd499a166137662c49208173e416216d81a68ec0"} Jan 29 15:12:14 crc kubenswrapper[4620]: I0129 15:12:14.677686 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" event={"ID":"5053966b-151c-44ff-9f11-2750a48b8147","Type":"ContainerStarted","Data":"fa65ae4eda56c44b5616292dd2e43627c717d4f1153f99497681551f9cb63a76"} Jan 29 15:12:15 crc kubenswrapper[4620]: I0129 15:12:15.690498 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" event={"ID":"5053966b-151c-44ff-9f11-2750a48b8147","Type":"ContainerStarted","Data":"2b03aaa3272fc9219e1abc9739b76b42619b74b2cd71984793c74d269a375ca3"} Jan 29 15:12:15 crc kubenswrapper[4620]: I0129 15:12:15.690851 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" event={"ID":"5053966b-151c-44ff-9f11-2750a48b8147","Type":"ContainerStarted","Data":"44a7da2a9ab661f0fe11869d144211e10ae02f914cef5ebbb2225a9ec39e543c"} Jan 29 15:12:16 crc kubenswrapper[4620]: I0129 15:12:16.700713 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" event={"ID":"5053966b-151c-44ff-9f11-2750a48b8147","Type":"ContainerStarted","Data":"768aa013559fd663da9280ceda9eb61730ca034d72a324a85815b4a6e861496f"} Jan 29 15:12:16 crc kubenswrapper[4620]: I0129 15:12:16.700937 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" event={"ID":"5053966b-151c-44ff-9f11-2750a48b8147","Type":"ContainerStarted","Data":"000e5a4e3fb4d17101317f06c97ce44db5192c04a1132de31a5cb000fc691e57"} Jan 29 15:12:17 crc kubenswrapper[4620]: I0129 15:12:17.707924 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" event={"ID":"5053966b-151c-44ff-9f11-2750a48b8147","Type":"ContainerStarted","Data":"15ca8ab8d7d8350c1a62efe057ba80ea28814dccee672342523af9a8e53943bd"} Jan 29 15:12:19 crc kubenswrapper[4620]: I0129 15:12:19.722661 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" event={"ID":"5053966b-151c-44ff-9f11-2750a48b8147","Type":"ContainerStarted","Data":"ab529882d90559e4058b69718e5fae8c1089547cc7fd8d5ac136a8e49f2a969c"} Jan 29 15:12:23 crc kubenswrapper[4620]: I0129 15:12:23.872651 4620 scope.go:117] "RemoveContainer" containerID="8e9ea767b98760df113964c83b8aa3b0a3171651da218d1ca361f2f91ef91add" Jan 29 15:12:23 crc kubenswrapper[4620]: E0129 15:12:23.872909 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-tlwgt_openshift-multus(f66b658d-e5ec-445e-9494-0a0062e87c4c)\"" pod="openshift-multus/multus-tlwgt" podUID="f66b658d-e5ec-445e-9494-0a0062e87c4c" Jan 29 15:12:30 crc kubenswrapper[4620]: I0129 15:12:30.796580 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" event={"ID":"5053966b-151c-44ff-9f11-2750a48b8147","Type":"ContainerStarted","Data":"359e1807a4272da4ac10aa9a43aaaa9043e3223e410896cbc18d10580ed89fed"} Jan 29 15:12:33 crc kubenswrapper[4620]: I0129 15:12:33.816448 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" event={"ID":"5053966b-151c-44ff-9f11-2750a48b8147","Type":"ContainerStarted","Data":"cc908dd9d5173dd11fb0171f432068f9e413c8327f0bb0fae00fde17da67ae57"} Jan 29 15:12:33 crc kubenswrapper[4620]: I0129 15:12:33.816808 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:33 crc kubenswrapper[4620]: I0129 15:12:33.816824 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:33 crc kubenswrapper[4620]: I0129 15:12:33.816926 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:33 crc kubenswrapper[4620]: I0129 15:12:33.846628 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" podStartSLOduration=21.846609072 podStartE2EDuration="21.846609072s" podCreationTimestamp="2026-01-29 15:12:12 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:12:33.843941555 +0000 UTC m=+694.456769210" watchObservedRunningTime="2026-01-29 15:12:33.846609072 +0000 UTC m=+694.459436737" Jan 29 15:12:33 crc kubenswrapper[4620]: I0129 15:12:33.850081 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:33 crc kubenswrapper[4620]: I0129 15:12:33.854030 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:35 crc kubenswrapper[4620]: I0129 15:12:35.872613 4620 scope.go:117] "RemoveContainer" containerID="8e9ea767b98760df113964c83b8aa3b0a3171651da218d1ca361f2f91ef91add" Jan 29 15:12:42 crc kubenswrapper[4620]: I0129 15:12:42.110806 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tlwgt_f66b658d-e5ec-445e-9494-0a0062e87c4c/kube-multus/2.log" Jan 29 15:12:42 crc kubenswrapper[4620]: I0129 15:12:42.111892 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tlwgt" event={"ID":"f66b658d-e5ec-445e-9494-0a0062e87c4c","Type":"ContainerStarted","Data":"20284c94b62fa1855792714601fb72a66a39c54d3ca5833ed62bcca3b491d2db"} Jan 29 15:12:42 crc kubenswrapper[4620]: E0129 15:12:42.211325 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="quay.io/jetstack/cert-manager-cainjector:v1.19.2" Jan 29 15:12:42 crc kubenswrapper[4620]: E0129 15:12:42.211676 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-cainjector,Image:quay.io/jetstack/cert-manager-cainjector:v1.19.2,Command:[],Args:[--v=2 --leader-election-namespace=kube-system],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7rlxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000680000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-cainjector-cf98fcc89-c7xrn_cert-manager(14260386-a556-4e0b-9915-ddca9755fd9c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:12:42 crc kubenswrapper[4620]: E0129 15:12:42.213048 4620 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-c7xrn" podUID="14260386-a556-4e0b-9915-ddca9755fd9c" Jan 29 15:12:42 crc kubenswrapper[4620]: I0129 15:12:42.577582 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-br9jw" Jan 29 15:12:43 crc kubenswrapper[4620]: E0129 15:12:43.118115 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/jetstack/cert-manager-cainjector:v1.19.2\\\"\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-c7xrn" podUID="14260386-a556-4e0b-9915-ddca9755fd9c" Jan 29 15:12:58 crc kubenswrapper[4620]: E0129 15:12:58.728597 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="quay.io/jetstack/cert-manager-controller:v1.19.2" Jan 29 15:12:58 crc kubenswrapper[4620]: E0129 15:12:58.731250 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-controller,Image:quay.io/jetstack/cert-manager-controller:v1.19.2,Command:[],Args:[--v=2 --cluster-resource-namespace=$(POD_NAMESPACE) --leader-election-namespace=kube-system --acme-http01-solver-image=quay.io/jetstack/cert-manager-acmesolver:v1.19.2 --max-concurrent-challenges=60],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},ContainerPort{Name:http-healthz,HostPort:0,ContainerPort:9403,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ch4nj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 http-healthz},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000680000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-858654f9db-cvcw9_cert-manager(df87fa1f-48eb-453e-8f45-d9cb15f59c04): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 
15:12:58 crc kubenswrapper[4620]: E0129 15:12:58.733221 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager/cert-manager-858654f9db-cvcw9" podUID="df87fa1f-48eb-453e-8f45-d9cb15f59c04" Jan 29 15:12:59 crc kubenswrapper[4620]: I0129 15:12:59.227887 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-cnndp" event={"ID":"28179b6f-f84a-4a18-97a0-5a2f6f06423e","Type":"ContainerStarted","Data":"a9dcd50b620909b163730c4c6d6e26c3db4170b3cc4a48a863487a2293f58886"} Jan 29 15:12:59 crc kubenswrapper[4620]: I0129 15:12:59.227932 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-cnndp" Jan 29 15:12:59 crc kubenswrapper[4620]: E0129 15:12:59.229695 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/jetstack/cert-manager-controller:v1.19.2\\\"\"" pod="cert-manager/cert-manager-858654f9db-cvcw9" podUID="df87fa1f-48eb-453e-8f45-d9cb15f59c04" Jan 29 15:12:59 crc kubenswrapper[4620]: I0129 15:12:59.264660 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-cnndp" podStartSLOduration=2.835396277 podStartE2EDuration="59.264611817s" podCreationTimestamp="2026-01-29 15:12:00 +0000 UTC" firstStartedPulling="2026-01-29 15:12:01.162321859 +0000 UTC m=+661.775149504" lastFinishedPulling="2026-01-29 15:12:57.591537399 +0000 UTC m=+718.204365044" observedRunningTime="2026-01-29 15:12:59.260722411 +0000 UTC m=+719.873550076" watchObservedRunningTime="2026-01-29 15:12:59.264611817 +0000 UTC m=+719.877439472" Jan 29 15:13:02 crc kubenswrapper[4620]: I0129 15:13:02.249104 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-c7xrn" event={"ID":"14260386-a556-4e0b-9915-ddca9755fd9c","Type":"ContainerStarted","Data":"c147e64b9f4d43b0e10d7a57941db9dbb2ea69955c74509b85662f0d513ae9b9"} Jan 29 15:13:02 crc kubenswrapper[4620]: I0129 15:13:02.302823 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-c7xrn" podStartSLOduration=2.203361763 podStartE2EDuration="1m2.302797612s" podCreationTimestamp="2026-01-29 15:12:00 +0000 UTC" firstStartedPulling="2026-01-29 15:12:01.101219092 +0000 UTC m=+661.714046737" lastFinishedPulling="2026-01-29 15:13:01.200654941 +0000 UTC m=+721.813482586" observedRunningTime="2026-01-29 15:13:02.269859096 +0000 UTC m=+722.882686771" watchObservedRunningTime="2026-01-29 15:13:02.302797612 +0000 UTC m=+722.915625257" Jan 29 15:13:05 crc kubenswrapper[4620]: I0129 15:13:05.777717 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-cnndp" Jan 29 15:13:15 crc kubenswrapper[4620]: I0129 15:13:15.330355 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-cvcw9" event={"ID":"df87fa1f-48eb-453e-8f45-d9cb15f59c04","Type":"ContainerStarted","Data":"8f270c4dc52bb07a4f159e35c23045f8ed20a005389f5ae18a2f6b50af7aecff"} Jan 29 15:13:15 crc kubenswrapper[4620]: I0129 15:13:15.348601 4620 pod_startup_latency_tracker.go:104] "Observed pod startup 
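
The pod_startup_latency_tracker records above contain enough data to check how their two durations relate. For cert-manager-webhook-687f57d79b-cnndp, podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (15:12:59.264611817 - 15:12:00 = 59.264611817s), and podStartSLOduration is that figure minus the image-pull window (59.264611817s - (15:12:57.591537399 - 15:12:01.162321859) = 2.835396277s), exactly the values logged. The same arithmetic in Go, with the timestamps copied from the record:

    package main

    import (
        "fmt"
        "time"
    )

    // mustParse uses the layout of Go's default time.Time formatting,
    // which is what these log fields appear to use.
    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-01-29 15:12:00 +0000 UTC")
        firstPull := mustParse("2026-01-29 15:12:01.162321859 +0000 UTC")
        lastPull := mustParse("2026-01-29 15:12:57.591537399 +0000 UTC")
        watched := mustParse("2026-01-29 15:12:59.264611817 +0000 UTC")

        e2e := watched.Sub(created)
        slo := e2e - lastPull.Sub(firstPull) // end-to-end minus image-pull time
        fmt.Println(e2e, slo)                // 59.264611817s 2.835396277s
    }
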
duration" pod="cert-manager/cert-manager-858654f9db-cvcw9" podStartSLOduration=1.7151601520000002 podStartE2EDuration="1m15.348579289s" podCreationTimestamp="2026-01-29 15:12:00 +0000 UTC" firstStartedPulling="2026-01-29 15:12:01.111134563 +0000 UTC m=+661.723962228" lastFinishedPulling="2026-01-29 15:13:14.74455372 +0000 UTC m=+735.357381365" observedRunningTime="2026-01-29 15:13:15.346039876 +0000 UTC m=+735.958867551" watchObservedRunningTime="2026-01-29 15:13:15.348579289 +0000 UTC m=+735.961406934" Jan 29 15:13:34 crc kubenswrapper[4620]: I0129 15:13:34.110751 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:13:34 crc kubenswrapper[4620]: I0129 15:13:34.111371 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:14:00 crc kubenswrapper[4620]: I0129 15:14:00.648390 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh"] Jan 29 15:14:00 crc kubenswrapper[4620]: I0129 15:14:00.650073 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh" Jan 29 15:14:00 crc kubenswrapper[4620]: I0129 15:14:00.651956 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 15:14:00 crc kubenswrapper[4620]: I0129 15:14:00.664180 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh"] Jan 29 15:14:00 crc kubenswrapper[4620]: I0129 15:14:00.798595 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/661ca67d-1c4e-4f80-a931-cb9989be0907-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh\" (UID: \"661ca67d-1c4e-4f80-a931-cb9989be0907\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh" Jan 29 15:14:00 crc kubenswrapper[4620]: I0129 15:14:00.798925 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/661ca67d-1c4e-4f80-a931-cb9989be0907-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh\" (UID: \"661ca67d-1c4e-4f80-a931-cb9989be0907\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh" Jan 29 15:14:00 crc kubenswrapper[4620]: I0129 15:14:00.798969 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghxzw\" (UniqueName: \"kubernetes.io/projected/661ca67d-1c4e-4f80-a931-cb9989be0907-kube-api-access-ghxzw\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh\" (UID: \"661ca67d-1c4e-4f80-a931-cb9989be0907\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh" Jan 29 15:14:00 crc 
kubenswrapper[4620]: I0129 15:14:00.900498 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/661ca67d-1c4e-4f80-a931-cb9989be0907-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh\" (UID: \"661ca67d-1c4e-4f80-a931-cb9989be0907\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh" Jan 29 15:14:00 crc kubenswrapper[4620]: I0129 15:14:00.900569 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghxzw\" (UniqueName: \"kubernetes.io/projected/661ca67d-1c4e-4f80-a931-cb9989be0907-kube-api-access-ghxzw\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh\" (UID: \"661ca67d-1c4e-4f80-a931-cb9989be0907\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh" Jan 29 15:14:00 crc kubenswrapper[4620]: I0129 15:14:00.900621 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/661ca67d-1c4e-4f80-a931-cb9989be0907-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh\" (UID: \"661ca67d-1c4e-4f80-a931-cb9989be0907\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh" Jan 29 15:14:00 crc kubenswrapper[4620]: I0129 15:14:00.901002 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/661ca67d-1c4e-4f80-a931-cb9989be0907-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh\" (UID: \"661ca67d-1c4e-4f80-a931-cb9989be0907\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh" Jan 29 15:14:00 crc kubenswrapper[4620]: I0129 15:14:00.901258 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/661ca67d-1c4e-4f80-a931-cb9989be0907-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh\" (UID: \"661ca67d-1c4e-4f80-a931-cb9989be0907\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh" Jan 29 15:14:00 crc kubenswrapper[4620]: I0129 15:14:00.933386 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghxzw\" (UniqueName: \"kubernetes.io/projected/661ca67d-1c4e-4f80-a931-cb9989be0907-kube-api-access-ghxzw\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh\" (UID: \"661ca67d-1c4e-4f80-a931-cb9989be0907\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh" Jan 29 15:14:00 crc kubenswrapper[4620]: I0129 15:14:00.969957 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 15:14:00 crc kubenswrapper[4620]: I0129 15:14:00.978388 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh" Jan 29 15:14:01 crc kubenswrapper[4620]: I0129 15:14:01.405825 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh"] Jan 29 15:14:01 crc kubenswrapper[4620]: I0129 15:14:01.618423 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh" event={"ID":"661ca67d-1c4e-4f80-a931-cb9989be0907","Type":"ContainerStarted","Data":"cda102bda8ac5a5e4ca79a7b7885e4df7379690266665bf68575b4e1c633e6f5"} Jan 29 15:14:02 crc kubenswrapper[4620]: I0129 15:14:02.625620 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh" event={"ID":"661ca67d-1c4e-4f80-a931-cb9989be0907","Type":"ContainerStarted","Data":"4b55d30b64e56dd8095824f336f25653202387fad778e441dd3fbedbfd5509a0"} Jan 29 15:14:03 crc kubenswrapper[4620]: I0129 15:14:03.632554 4620 generic.go:334] "Generic (PLEG): container finished" podID="661ca67d-1c4e-4f80-a931-cb9989be0907" containerID="4b55d30b64e56dd8095824f336f25653202387fad778e441dd3fbedbfd5509a0" exitCode=0 Jan 29 15:14:03 crc kubenswrapper[4620]: I0129 15:14:03.632599 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh" event={"ID":"661ca67d-1c4e-4f80-a931-cb9989be0907","Type":"ContainerDied","Data":"4b55d30b64e56dd8095824f336f25653202387fad778e441dd3fbedbfd5509a0"} Jan 29 15:14:04 crc kubenswrapper[4620]: I0129 15:14:04.110867 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:14:04 crc kubenswrapper[4620]: I0129 15:14:04.111120 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:14:04 crc kubenswrapper[4620]: I0129 15:14:04.712473 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sjwz6"] Jan 29 15:14:04 crc kubenswrapper[4620]: I0129 15:14:04.721305 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sjwz6" Jan 29 15:14:04 crc kubenswrapper[4620]: I0129 15:14:04.747493 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sjwz6"] Jan 29 15:14:04 crc kubenswrapper[4620]: I0129 15:14:04.860212 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c5a1b97-7ab8-4e88-9a1f-3042e4526e10-catalog-content\") pod \"redhat-operators-sjwz6\" (UID: \"2c5a1b97-7ab8-4e88-9a1f-3042e4526e10\") " pod="openshift-marketplace/redhat-operators-sjwz6" Jan 29 15:14:04 crc kubenswrapper[4620]: I0129 15:14:04.860516 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7gdh\" (UniqueName: \"kubernetes.io/projected/2c5a1b97-7ab8-4e88-9a1f-3042e4526e10-kube-api-access-g7gdh\") pod \"redhat-operators-sjwz6\" (UID: \"2c5a1b97-7ab8-4e88-9a1f-3042e4526e10\") " pod="openshift-marketplace/redhat-operators-sjwz6" Jan 29 15:14:04 crc kubenswrapper[4620]: I0129 15:14:04.860690 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c5a1b97-7ab8-4e88-9a1f-3042e4526e10-utilities\") pod \"redhat-operators-sjwz6\" (UID: \"2c5a1b97-7ab8-4e88-9a1f-3042e4526e10\") " pod="openshift-marketplace/redhat-operators-sjwz6" Jan 29 15:14:04 crc kubenswrapper[4620]: I0129 15:14:04.961729 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c5a1b97-7ab8-4e88-9a1f-3042e4526e10-utilities\") pod \"redhat-operators-sjwz6\" (UID: \"2c5a1b97-7ab8-4e88-9a1f-3042e4526e10\") " pod="openshift-marketplace/redhat-operators-sjwz6" Jan 29 15:14:04 crc kubenswrapper[4620]: I0129 15:14:04.961827 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c5a1b97-7ab8-4e88-9a1f-3042e4526e10-catalog-content\") pod \"redhat-operators-sjwz6\" (UID: \"2c5a1b97-7ab8-4e88-9a1f-3042e4526e10\") " pod="openshift-marketplace/redhat-operators-sjwz6" Jan 29 15:14:04 crc kubenswrapper[4620]: I0129 15:14:04.961875 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7gdh\" (UniqueName: \"kubernetes.io/projected/2c5a1b97-7ab8-4e88-9a1f-3042e4526e10-kube-api-access-g7gdh\") pod \"redhat-operators-sjwz6\" (UID: \"2c5a1b97-7ab8-4e88-9a1f-3042e4526e10\") " pod="openshift-marketplace/redhat-operators-sjwz6" Jan 29 15:14:04 crc kubenswrapper[4620]: I0129 15:14:04.962611 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c5a1b97-7ab8-4e88-9a1f-3042e4526e10-utilities\") pod \"redhat-operators-sjwz6\" (UID: \"2c5a1b97-7ab8-4e88-9a1f-3042e4526e10\") " pod="openshift-marketplace/redhat-operators-sjwz6" Jan 29 15:14:04 crc kubenswrapper[4620]: I0129 15:14:04.962722 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c5a1b97-7ab8-4e88-9a1f-3042e4526e10-catalog-content\") pod \"redhat-operators-sjwz6\" (UID: \"2c5a1b97-7ab8-4e88-9a1f-3042e4526e10\") " pod="openshift-marketplace/redhat-operators-sjwz6" Jan 29 15:14:04 crc kubenswrapper[4620]: I0129 15:14:04.986516 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-g7gdh\" (UniqueName: \"kubernetes.io/projected/2c5a1b97-7ab8-4e88-9a1f-3042e4526e10-kube-api-access-g7gdh\") pod \"redhat-operators-sjwz6\" (UID: \"2c5a1b97-7ab8-4e88-9a1f-3042e4526e10\") " pod="openshift-marketplace/redhat-operators-sjwz6" Jan 29 15:14:05 crc kubenswrapper[4620]: I0129 15:14:05.041298 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sjwz6" Jan 29 15:14:05 crc kubenswrapper[4620]: I0129 15:14:05.281404 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sjwz6"] Jan 29 15:14:05 crc kubenswrapper[4620]: I0129 15:14:05.650834 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sjwz6" event={"ID":"2c5a1b97-7ab8-4e88-9a1f-3042e4526e10","Type":"ContainerStarted","Data":"451fd7faacc40e18469f5278d07b8a4c2ca9580e3679c9442d1ab4dd55e7ac8d"} Jan 29 15:14:05 crc kubenswrapper[4620]: I0129 15:14:05.650880 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sjwz6" event={"ID":"2c5a1b97-7ab8-4e88-9a1f-3042e4526e10","Type":"ContainerStarted","Data":"eb24e33d30834725ffb55263a8a372831bd01fddab42e5f17d62a4ffdc0196ba"} Jan 29 15:14:06 crc kubenswrapper[4620]: I0129 15:14:06.658114 4620 generic.go:334] "Generic (PLEG): container finished" podID="2c5a1b97-7ab8-4e88-9a1f-3042e4526e10" containerID="451fd7faacc40e18469f5278d07b8a4c2ca9580e3679c9442d1ab4dd55e7ac8d" exitCode=0 Jan 29 15:14:06 crc kubenswrapper[4620]: I0129 15:14:06.658170 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sjwz6" event={"ID":"2c5a1b97-7ab8-4e88-9a1f-3042e4526e10","Type":"ContainerDied","Data":"451fd7faacc40e18469f5278d07b8a4c2ca9580e3679c9442d1ab4dd55e7ac8d"} Jan 29 15:14:08 crc kubenswrapper[4620]: I0129 15:14:08.806830 4620 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 15:14:13 crc kubenswrapper[4620]: I0129 15:14:13.701208 4620 generic.go:334] "Generic (PLEG): container finished" podID="2c5a1b97-7ab8-4e88-9a1f-3042e4526e10" containerID="aa332fa2c313e910a813221f9debbfa1cc0baafe8c2406f32dc34b495bb22f73" exitCode=0 Jan 29 15:14:13 crc kubenswrapper[4620]: I0129 15:14:13.701333 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sjwz6" event={"ID":"2c5a1b97-7ab8-4e88-9a1f-3042e4526e10","Type":"ContainerDied","Data":"aa332fa2c313e910a813221f9debbfa1cc0baafe8c2406f32dc34b495bb22f73"} Jan 29 15:14:13 crc kubenswrapper[4620]: I0129 15:14:13.704305 4620 generic.go:334] "Generic (PLEG): container finished" podID="661ca67d-1c4e-4f80-a931-cb9989be0907" containerID="7d399d0f952e1f13433c83ed743e0f1bca81d3dffc46228533825ceacf320074" exitCode=0 Jan 29 15:14:13 crc kubenswrapper[4620]: I0129 15:14:13.704338 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh" event={"ID":"661ca67d-1c4e-4f80-a931-cb9989be0907","Type":"ContainerDied","Data":"7d399d0f952e1f13433c83ed743e0f1bca81d3dffc46228533825ceacf320074"} Jan 29 15:14:14 crc kubenswrapper[4620]: I0129 15:14:14.716094 4620 generic.go:334] "Generic (PLEG): container finished" podID="661ca67d-1c4e-4f80-a931-cb9989be0907" containerID="d25cc18874e01feb2e4114e4836f2a66636dbebe9472a74fe22b34da6ca198e9" exitCode=0 Jan 29 15:14:14 crc kubenswrapper[4620]: I0129 15:14:14.716161 4620 
Jan 29 15:14:14 crc kubenswrapper[4620]: I0129 15:14:14.716161 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh" event={"ID":"661ca67d-1c4e-4f80-a931-cb9989be0907","Type":"ContainerDied","Data":"d25cc18874e01feb2e4114e4836f2a66636dbebe9472a74fe22b34da6ca198e9"}
Jan 29 15:14:15 crc kubenswrapper[4620]: I0129 15:14:15.725311 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sjwz6" event={"ID":"2c5a1b97-7ab8-4e88-9a1f-3042e4526e10","Type":"ContainerStarted","Data":"11d3c468a4817d134a731352102a144d5be9555779bf0476f36536fb5b334bc1"}
Jan 29 15:14:15 crc kubenswrapper[4620]: I0129 15:14:15.751987 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sjwz6" podStartSLOduration=2.286906185 podStartE2EDuration="11.751969889s" podCreationTimestamp="2026-01-29 15:14:04 +0000 UTC" firstStartedPulling="2026-01-29 15:14:05.652166033 +0000 UTC m=+786.264993678" lastFinishedPulling="2026-01-29 15:14:15.117229737 +0000 UTC m=+795.730057382" observedRunningTime="2026-01-29 15:14:15.75012003 +0000 UTC m=+796.362947685" watchObservedRunningTime="2026-01-29 15:14:15.751969889 +0000 UTC m=+796.364797534"
Jan 29 15:14:15 crc kubenswrapper[4620]: I0129 15:14:15.983677 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh"
Jan 29 15:14:16 crc kubenswrapper[4620]: I0129 15:14:16.106184 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/661ca67d-1c4e-4f80-a931-cb9989be0907-bundle\") pod \"661ca67d-1c4e-4f80-a931-cb9989be0907\" (UID: \"661ca67d-1c4e-4f80-a931-cb9989be0907\") "
Jan 29 15:14:16 crc kubenswrapper[4620]: I0129 15:14:16.106285 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghxzw\" (UniqueName: \"kubernetes.io/projected/661ca67d-1c4e-4f80-a931-cb9989be0907-kube-api-access-ghxzw\") pod \"661ca67d-1c4e-4f80-a931-cb9989be0907\" (UID: \"661ca67d-1c4e-4f80-a931-cb9989be0907\") "
Jan 29 15:14:16 crc kubenswrapper[4620]: I0129 15:14:16.106338 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/661ca67d-1c4e-4f80-a931-cb9989be0907-util\") pod \"661ca67d-1c4e-4f80-a931-cb9989be0907\" (UID: \"661ca67d-1c4e-4f80-a931-cb9989be0907\") "
Jan 29 15:14:16 crc kubenswrapper[4620]: I0129 15:14:16.106827 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/661ca67d-1c4e-4f80-a931-cb9989be0907-bundle" (OuterVolumeSpecName: "bundle") pod "661ca67d-1c4e-4f80-a931-cb9989be0907" (UID: "661ca67d-1c4e-4f80-a931-cb9989be0907"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:14:16 crc kubenswrapper[4620]: I0129 15:14:16.112919 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/661ca67d-1c4e-4f80-a931-cb9989be0907-kube-api-access-ghxzw" (OuterVolumeSpecName: "kube-api-access-ghxzw") pod "661ca67d-1c4e-4f80-a931-cb9989be0907" (UID: "661ca67d-1c4e-4f80-a931-cb9989be0907"). InnerVolumeSpecName "kube-api-access-ghxzw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:14:16 crc kubenswrapper[4620]: I0129 15:14:16.117491 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/661ca67d-1c4e-4f80-a931-cb9989be0907-util" (OuterVolumeSpecName: "util") pod "661ca67d-1c4e-4f80-a931-cb9989be0907" (UID: "661ca67d-1c4e-4f80-a931-cb9989be0907"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:14:16 crc kubenswrapper[4620]: I0129 15:14:16.207431 4620 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/661ca67d-1c4e-4f80-a931-cb9989be0907-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 15:14:16 crc kubenswrapper[4620]: I0129 15:14:16.207676 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghxzw\" (UniqueName: \"kubernetes.io/projected/661ca67d-1c4e-4f80-a931-cb9989be0907-kube-api-access-ghxzw\") on node \"crc\" DevicePath \"\""
Jan 29 15:14:16 crc kubenswrapper[4620]: I0129 15:14:16.207773 4620 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/661ca67d-1c4e-4f80-a931-cb9989be0907-util\") on node \"crc\" DevicePath \"\""
Jan 29 15:14:16 crc kubenswrapper[4620]: I0129 15:14:16.732829 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh" event={"ID":"661ca67d-1c4e-4f80-a931-cb9989be0907","Type":"ContainerDied","Data":"cda102bda8ac5a5e4ca79a7b7885e4df7379690266665bf68575b4e1c633e6f5"}
Jan 29 15:14:16 crc kubenswrapper[4620]: I0129 15:14:16.732896 4620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cda102bda8ac5a5e4ca79a7b7885e4df7379690266665bf68575b4e1c633e6f5"
Jan 29 15:14:16 crc kubenswrapper[4620]: I0129 15:14:16.732859 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh"
Jan 29 15:14:22 crc kubenswrapper[4620]: I0129 15:14:22.306034 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-7gw7j"]
Jan 29 15:14:22 crc kubenswrapper[4620]: E0129 15:14:22.306772 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="661ca67d-1c4e-4f80-a931-cb9989be0907" containerName="pull"
Jan 29 15:14:22 crc kubenswrapper[4620]: I0129 15:14:22.306786 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="661ca67d-1c4e-4f80-a931-cb9989be0907" containerName="pull"
Jan 29 15:14:22 crc kubenswrapper[4620]: E0129 15:14:22.306805 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="661ca67d-1c4e-4f80-a931-cb9989be0907" containerName="util"
Jan 29 15:14:22 crc kubenswrapper[4620]: I0129 15:14:22.306810 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="661ca67d-1c4e-4f80-a931-cb9989be0907" containerName="util"
Jan 29 15:14:22 crc kubenswrapper[4620]: E0129 15:14:22.306830 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="661ca67d-1c4e-4f80-a931-cb9989be0907" containerName="extract"
Jan 29 15:14:22 crc kubenswrapper[4620]: I0129 15:14:22.306837 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="661ca67d-1c4e-4f80-a931-cb9989be0907" containerName="extract"
Jan 29 15:14:22 crc kubenswrapper[4620]: I0129 15:14:22.306922 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="661ca67d-1c4e-4f80-a931-cb9989be0907" containerName="extract"
Jan 29 15:14:22 crc kubenswrapper[4620]: I0129 15:14:22.307264 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-7gw7j"
Jan 29 15:14:22 crc kubenswrapper[4620]: I0129 15:14:22.314838 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-2f2b9"
Jan 29 15:14:22 crc kubenswrapper[4620]: I0129 15:14:22.322887 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Jan 29 15:14:22 crc kubenswrapper[4620]: I0129 15:14:22.323447 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Jan 29 15:14:22 crc kubenswrapper[4620]: I0129 15:14:22.333376 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-7gw7j"]
Jan 29 15:14:22 crc kubenswrapper[4620]: I0129 15:14:22.480354 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgkwk\" (UniqueName: \"kubernetes.io/projected/e43505da-5a95-4f4d-8b44-b67ccca2ef23-kube-api-access-fgkwk\") pod \"nmstate-operator-646758c888-7gw7j\" (UID: \"e43505da-5a95-4f4d-8b44-b67ccca2ef23\") " pod="openshift-nmstate/nmstate-operator-646758c888-7gw7j"
Jan 29 15:14:22 crc kubenswrapper[4620]: I0129 15:14:22.582188 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgkwk\" (UniqueName: \"kubernetes.io/projected/e43505da-5a95-4f4d-8b44-b67ccca2ef23-kube-api-access-fgkwk\") pod \"nmstate-operator-646758c888-7gw7j\" (UID: \"e43505da-5a95-4f4d-8b44-b67ccca2ef23\") " pod="openshift-nmstate/nmstate-operator-646758c888-7gw7j"
Jan 29 15:14:22 crc kubenswrapper[4620]: I0129 15:14:22.602940 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgkwk\" (UniqueName: \"kubernetes.io/projected/e43505da-5a95-4f4d-8b44-b67ccca2ef23-kube-api-access-fgkwk\") pod \"nmstate-operator-646758c888-7gw7j\" (UID: \"e43505da-5a95-4f4d-8b44-b67ccca2ef23\") " pod="openshift-nmstate/nmstate-operator-646758c888-7gw7j"
Jan 29 15:14:22 crc kubenswrapper[4620]: I0129 15:14:22.621569 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-7gw7j"
Jan 29 15:14:23 crc kubenswrapper[4620]: I0129 15:14:23.120503 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-7gw7j"]
Jan 29 15:14:23 crc kubenswrapper[4620]: W0129 15:14:23.178377 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode43505da_5a95_4f4d_8b44_b67ccca2ef23.slice/crio-b32578cd752e6304fac4189e7996b637444a163c65066a50b874bb09cc49ea22 WatchSource:0}: Error finding container b32578cd752e6304fac4189e7996b637444a163c65066a50b874bb09cc49ea22: Status 404 returned error can't find the container with id b32578cd752e6304fac4189e7996b637444a163c65066a50b874bb09cc49ea22
Jan 29 15:14:23 crc kubenswrapper[4620]: I0129 15:14:23.771053 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-7gw7j" event={"ID":"e43505da-5a95-4f4d-8b44-b67ccca2ef23","Type":"ContainerStarted","Data":"b32578cd752e6304fac4189e7996b637444a163c65066a50b874bb09cc49ea22"}
Jan 29 15:14:25 crc kubenswrapper[4620]: I0129 15:14:25.046091 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sjwz6"
Jan 29 15:14:25 crc kubenswrapper[4620]: I0129 15:14:25.046422 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sjwz6"
Jan 29 15:14:25 crc kubenswrapper[4620]: I0129 15:14:25.096552 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sjwz6"
Jan 29 15:14:25 crc kubenswrapper[4620]: I0129 15:14:25.814873 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sjwz6"
Jan 29 15:14:27 crc kubenswrapper[4620]: I0129 15:14:27.232601 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sjwz6"]
Jan 29 15:14:28 crc kubenswrapper[4620]: I0129 15:14:28.801513 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sjwz6" podUID="2c5a1b97-7ab8-4e88-9a1f-3042e4526e10" containerName="registry-server" containerID="cri-o://11d3c468a4817d134a731352102a144d5be9555779bf0476f36536fb5b334bc1" gracePeriod=2
Jan 29 15:14:29 crc kubenswrapper[4620]: I0129 15:14:29.807962 4620 generic.go:334] "Generic (PLEG): container finished" podID="2c5a1b97-7ab8-4e88-9a1f-3042e4526e10" containerID="11d3c468a4817d134a731352102a144d5be9555779bf0476f36536fb5b334bc1" exitCode=0
Jan 29 15:14:29 crc kubenswrapper[4620]: I0129 15:14:29.808003 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sjwz6" event={"ID":"2c5a1b97-7ab8-4e88-9a1f-3042e4526e10","Type":"ContainerDied","Data":"11d3c468a4817d134a731352102a144d5be9555779bf0476f36536fb5b334bc1"}
Jan 29 15:14:32 crc kubenswrapper[4620]: I0129 15:14:32.825579 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sjwz6" event={"ID":"2c5a1b97-7ab8-4e88-9a1f-3042e4526e10","Type":"ContainerDied","Data":"eb24e33d30834725ffb55263a8a372831bd01fddab42e5f17d62a4ffdc0196ba"}
Jan 29 15:14:32 crc kubenswrapper[4620]: I0129 15:14:32.826131 4620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb24e33d30834725ffb55263a8a372831bd01fddab42e5f17d62a4ffdc0196ba"
Jan 29 15:14:32 crc kubenswrapper[4620]: I0129 15:14:32.837219 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sjwz6"
Jan 29 15:14:33 crc kubenswrapper[4620]: I0129 15:14:33.014146 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c5a1b97-7ab8-4e88-9a1f-3042e4526e10-utilities\") pod \"2c5a1b97-7ab8-4e88-9a1f-3042e4526e10\" (UID: \"2c5a1b97-7ab8-4e88-9a1f-3042e4526e10\") "
Jan 29 15:14:33 crc kubenswrapper[4620]: I0129 15:14:33.014257 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c5a1b97-7ab8-4e88-9a1f-3042e4526e10-catalog-content\") pod \"2c5a1b97-7ab8-4e88-9a1f-3042e4526e10\" (UID: \"2c5a1b97-7ab8-4e88-9a1f-3042e4526e10\") "
Jan 29 15:14:33 crc kubenswrapper[4620]: I0129 15:14:33.014310 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7gdh\" (UniqueName: \"kubernetes.io/projected/2c5a1b97-7ab8-4e88-9a1f-3042e4526e10-kube-api-access-g7gdh\") pod \"2c5a1b97-7ab8-4e88-9a1f-3042e4526e10\" (UID: \"2c5a1b97-7ab8-4e88-9a1f-3042e4526e10\") "
Jan 29 15:14:33 crc kubenswrapper[4620]: I0129 15:14:33.016076 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c5a1b97-7ab8-4e88-9a1f-3042e4526e10-utilities" (OuterVolumeSpecName: "utilities") pod "2c5a1b97-7ab8-4e88-9a1f-3042e4526e10" (UID: "2c5a1b97-7ab8-4e88-9a1f-3042e4526e10"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:14:33 crc kubenswrapper[4620]: I0129 15:14:33.020959 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c5a1b97-7ab8-4e88-9a1f-3042e4526e10-kube-api-access-g7gdh" (OuterVolumeSpecName: "kube-api-access-g7gdh") pod "2c5a1b97-7ab8-4e88-9a1f-3042e4526e10" (UID: "2c5a1b97-7ab8-4e88-9a1f-3042e4526e10"). InnerVolumeSpecName "kube-api-access-g7gdh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:14:33 crc kubenswrapper[4620]: I0129 15:14:33.116250 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7gdh\" (UniqueName: \"kubernetes.io/projected/2c5a1b97-7ab8-4e88-9a1f-3042e4526e10-kube-api-access-g7gdh\") on node \"crc\" DevicePath \"\""
Jan 29 15:14:33 crc kubenswrapper[4620]: I0129 15:14:33.116282 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c5a1b97-7ab8-4e88-9a1f-3042e4526e10-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 15:14:33 crc kubenswrapper[4620]: I0129 15:14:33.139252 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c5a1b97-7ab8-4e88-9a1f-3042e4526e10-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c5a1b97-7ab8-4e88-9a1f-3042e4526e10" (UID: "2c5a1b97-7ab8-4e88-9a1f-3042e4526e10"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:14:33 crc kubenswrapper[4620]: I0129 15:14:33.217288 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c5a1b97-7ab8-4e88-9a1f-3042e4526e10-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 15:14:33 crc kubenswrapper[4620]: I0129 15:14:33.832231 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-7gw7j" event={"ID":"e43505da-5a95-4f4d-8b44-b67ccca2ef23","Type":"ContainerStarted","Data":"83f61abd2388c6c706c0cb11c6d9389f97710ce7cdb9bf4be01119fe5ebb347d"}
Jan 29 15:14:33 crc kubenswrapper[4620]: I0129 15:14:33.832266 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sjwz6"
Jan 29 15:14:33 crc kubenswrapper[4620]: I0129 15:14:33.855594 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-7gw7j" podStartSLOduration=2.1401584590000002 podStartE2EDuration="11.85557367s" podCreationTimestamp="2026-01-29 15:14:22 +0000 UTC" firstStartedPulling="2026-01-29 15:14:23.181416154 +0000 UTC m=+803.794243799" lastFinishedPulling="2026-01-29 15:14:32.896831365 +0000 UTC m=+813.509659010" observedRunningTime="2026-01-29 15:14:33.853080381 +0000 UTC m=+814.465908026" watchObservedRunningTime="2026-01-29 15:14:33.85557367 +0000 UTC m=+814.468401315"
Jan 29 15:14:33 crc kubenswrapper[4620]: I0129 15:14:33.881451 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sjwz6"]
Jan 29 15:14:33 crc kubenswrapper[4620]: I0129 15:14:33.881890 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sjwz6"]
Jan 29 15:14:34 crc kubenswrapper[4620]: I0129 15:14:34.111472 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 15:14:34 crc kubenswrapper[4620]: I0129 15:14:34.111537 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 15:14:34 crc kubenswrapper[4620]: I0129 15:14:34.111582 4620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7469t"
Jan 29 15:14:34 crc kubenswrapper[4620]: I0129 15:14:34.112222 4620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"66781c2a016809706f71009a78161c76f619ea542b24dd9a5d78b07cc0a0ddc6"} pod="openshift-machine-config-operator/machine-config-daemon-7469t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 15:14:34 crc kubenswrapper[4620]: I0129 15:14:34.112278 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" containerID="cri-o://66781c2a016809706f71009a78161c76f619ea542b24dd9a5d78b07cc0a0ddc6" gracePeriod=600
Jan 29 15:14:34 crc kubenswrapper[4620]: I0129 15:14:34.839740 4620 generic.go:334] "Generic (PLEG): container finished" podID="a76cce43-3d01-4158-b23a-e21fd5927792" containerID="66781c2a016809706f71009a78161c76f619ea542b24dd9a5d78b07cc0a0ddc6" exitCode=0
Jan 29 15:14:34 crc kubenswrapper[4620]: I0129 15:14:34.839808 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerDied","Data":"66781c2a016809706f71009a78161c76f619ea542b24dd9a5d78b07cc0a0ddc6"}
Jan 29 15:14:34 crc kubenswrapper[4620]: I0129 15:14:34.839899 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerStarted","Data":"530d4b8825c86a1d226272eaddfd8776e92faf5ad624ab12e26dd0d4fe879bf7"}
Jan 29 15:14:34 crc kubenswrapper[4620]: I0129 15:14:34.840682 4620 scope.go:117] "RemoveContainer" containerID="2393df41f01aa6bed609d7949bcef8efd536dd6c74fffd44de6b7f3407f9f534"
Jan 29 15:14:34 crc kubenswrapper[4620]: I0129 15:14:34.879213 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c5a1b97-7ab8-4e88-9a1f-3042e4526e10" path="/var/lib/kubelet/pods/2c5a1b97-7ab8-4e88-9a1f-3042e4526e10/volumes"
Jan 29 15:14:34 crc kubenswrapper[4620]: I0129 15:14:34.952574 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-qc5bc"]
Jan 29 15:14:34 crc kubenswrapper[4620]: E0129 15:14:34.952836 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c5a1b97-7ab8-4e88-9a1f-3042e4526e10" containerName="extract-content"
Jan 29 15:14:34 crc kubenswrapper[4620]: I0129 15:14:34.952857 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c5a1b97-7ab8-4e88-9a1f-3042e4526e10" containerName="extract-content"
Jan 29 15:14:34 crc kubenswrapper[4620]: E0129 15:14:34.952879 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c5a1b97-7ab8-4e88-9a1f-3042e4526e10" containerName="extract-utilities"
Jan 29 15:14:34 crc kubenswrapper[4620]: I0129 15:14:34.952887 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c5a1b97-7ab8-4e88-9a1f-3042e4526e10" containerName="extract-utilities"
Jan 29 15:14:34 crc kubenswrapper[4620]: E0129 15:14:34.952904 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c5a1b97-7ab8-4e88-9a1f-3042e4526e10" containerName="registry-server"
Jan 29 15:14:34 crc kubenswrapper[4620]: I0129 15:14:34.952913 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c5a1b97-7ab8-4e88-9a1f-3042e4526e10" containerName="registry-server"
Jan 29 15:14:34 crc kubenswrapper[4620]: I0129 15:14:34.953045 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c5a1b97-7ab8-4e88-9a1f-3042e4526e10" containerName="registry-server"
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-qc5bc" Jan 29 15:14:34 crc kubenswrapper[4620]: I0129 15:14:34.957557 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-hb28w" Jan 29 15:14:34 crc kubenswrapper[4620]: I0129 15:14:34.972706 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-4nx94"] Jan 29 15:14:34 crc kubenswrapper[4620]: I0129 15:14:34.973376 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4nx94" Jan 29 15:14:34 crc kubenswrapper[4620]: I0129 15:14:34.980420 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 29 15:14:34 crc kubenswrapper[4620]: I0129 15:14:34.989721 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-qc5bc"] Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.017871 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-fbf99"] Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.018993 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-fbf99" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.025387 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-4nx94"] Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.143997 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgzvg\" (UniqueName: \"kubernetes.io/projected/2fe4a43f-5be7-4f39-9698-53a404bc411e-kube-api-access-fgzvg\") pod \"nmstate-webhook-8474b5b9d8-4nx94\" (UID: \"2fe4a43f-5be7-4f39-9698-53a404bc411e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4nx94" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.144054 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2fe4a43f-5be7-4f39-9698-53a404bc411e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-4nx94\" (UID: \"2fe4a43f-5be7-4f39-9698-53a404bc411e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4nx94" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.144089 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f7f18598-e66f-49aa-855e-54ba8e4abced-ovs-socket\") pod \"nmstate-handler-fbf99\" (UID: \"f7f18598-e66f-49aa-855e-54ba8e4abced\") " pod="openshift-nmstate/nmstate-handler-fbf99" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.144135 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqxgx\" (UniqueName: \"kubernetes.io/projected/f7f18598-e66f-49aa-855e-54ba8e4abced-kube-api-access-kqxgx\") pod \"nmstate-handler-fbf99\" (UID: \"f7f18598-e66f-49aa-855e-54ba8e4abced\") " pod="openshift-nmstate/nmstate-handler-fbf99" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.144164 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f7f18598-e66f-49aa-855e-54ba8e4abced-nmstate-lock\") pod \"nmstate-handler-fbf99\" (UID: \"f7f18598-e66f-49aa-855e-54ba8e4abced\") " 
pod="openshift-nmstate/nmstate-handler-fbf99" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.144184 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk2nz\" (UniqueName: \"kubernetes.io/projected/ff1f7291-09f7-47b1-b883-6d473cfd46e2-kube-api-access-kk2nz\") pod \"nmstate-metrics-54757c584b-qc5bc\" (UID: \"ff1f7291-09f7-47b1-b883-6d473cfd46e2\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-qc5bc" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.144217 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f7f18598-e66f-49aa-855e-54ba8e4abced-dbus-socket\") pod \"nmstate-handler-fbf99\" (UID: \"f7f18598-e66f-49aa-855e-54ba8e4abced\") " pod="openshift-nmstate/nmstate-handler-fbf99" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.149810 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c7r2"] Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.150635 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c7r2" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.155197 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.155288 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.155337 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-tvv9d" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.161176 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c7r2"] Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.245124 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f7f18598-e66f-49aa-855e-54ba8e4abced-dbus-socket\") pod \"nmstate-handler-fbf99\" (UID: \"f7f18598-e66f-49aa-855e-54ba8e4abced\") " pod="openshift-nmstate/nmstate-handler-fbf99" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.245200 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgzvg\" (UniqueName: \"kubernetes.io/projected/2fe4a43f-5be7-4f39-9698-53a404bc411e-kube-api-access-fgzvg\") pod \"nmstate-webhook-8474b5b9d8-4nx94\" (UID: \"2fe4a43f-5be7-4f39-9698-53a404bc411e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4nx94" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.245227 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2fe4a43f-5be7-4f39-9698-53a404bc411e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-4nx94\" (UID: \"2fe4a43f-5be7-4f39-9698-53a404bc411e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4nx94" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.245265 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f7f18598-e66f-49aa-855e-54ba8e4abced-ovs-socket\") pod \"nmstate-handler-fbf99\" (UID: \"f7f18598-e66f-49aa-855e-54ba8e4abced\") " pod="openshift-nmstate/nmstate-handler-fbf99" Jan 29 15:14:35 crc 
kubenswrapper[4620]: I0129 15:14:35.245290 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s67hl\" (UniqueName: \"kubernetes.io/projected/316915e8-4161-4c98-927b-0434cbd2df0b-kube-api-access-s67hl\") pod \"nmstate-console-plugin-7754f76f8b-8c7r2\" (UID: \"316915e8-4161-4c98-927b-0434cbd2df0b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c7r2" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.245332 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/316915e8-4161-4c98-927b-0434cbd2df0b-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-8c7r2\" (UID: \"316915e8-4161-4c98-927b-0434cbd2df0b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c7r2" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.245363 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqxgx\" (UniqueName: \"kubernetes.io/projected/f7f18598-e66f-49aa-855e-54ba8e4abced-kube-api-access-kqxgx\") pod \"nmstate-handler-fbf99\" (UID: \"f7f18598-e66f-49aa-855e-54ba8e4abced\") " pod="openshift-nmstate/nmstate-handler-fbf99" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.245395 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk2nz\" (UniqueName: \"kubernetes.io/projected/ff1f7291-09f7-47b1-b883-6d473cfd46e2-kube-api-access-kk2nz\") pod \"nmstate-metrics-54757c584b-qc5bc\" (UID: \"ff1f7291-09f7-47b1-b883-6d473cfd46e2\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-qc5bc" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.245416 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f7f18598-e66f-49aa-855e-54ba8e4abced-nmstate-lock\") pod \"nmstate-handler-fbf99\" (UID: \"f7f18598-e66f-49aa-855e-54ba8e4abced\") " pod="openshift-nmstate/nmstate-handler-fbf99" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.245448 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/316915e8-4161-4c98-927b-0434cbd2df0b-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-8c7r2\" (UID: \"316915e8-4161-4c98-927b-0434cbd2df0b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c7r2" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.245816 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f7f18598-e66f-49aa-855e-54ba8e4abced-dbus-socket\") pod \"nmstate-handler-fbf99\" (UID: \"f7f18598-e66f-49aa-855e-54ba8e4abced\") " pod="openshift-nmstate/nmstate-handler-fbf99" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.246392 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f7f18598-e66f-49aa-855e-54ba8e4abced-ovs-socket\") pod \"nmstate-handler-fbf99\" (UID: \"f7f18598-e66f-49aa-855e-54ba8e4abced\") " pod="openshift-nmstate/nmstate-handler-fbf99" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.246806 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f7f18598-e66f-49aa-855e-54ba8e4abced-nmstate-lock\") pod \"nmstate-handler-fbf99\" (UID: 
\"f7f18598-e66f-49aa-855e-54ba8e4abced\") " pod="openshift-nmstate/nmstate-handler-fbf99" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.265857 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgzvg\" (UniqueName: \"kubernetes.io/projected/2fe4a43f-5be7-4f39-9698-53a404bc411e-kube-api-access-fgzvg\") pod \"nmstate-webhook-8474b5b9d8-4nx94\" (UID: \"2fe4a43f-5be7-4f39-9698-53a404bc411e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4nx94" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.266440 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk2nz\" (UniqueName: \"kubernetes.io/projected/ff1f7291-09f7-47b1-b883-6d473cfd46e2-kube-api-access-kk2nz\") pod \"nmstate-metrics-54757c584b-qc5bc\" (UID: \"ff1f7291-09f7-47b1-b883-6d473cfd46e2\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-qc5bc" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.272434 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-qc5bc" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.273478 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/2fe4a43f-5be7-4f39-9698-53a404bc411e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-4nx94\" (UID: \"2fe4a43f-5be7-4f39-9698-53a404bc411e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4nx94" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.287635 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4nx94" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.307563 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqxgx\" (UniqueName: \"kubernetes.io/projected/f7f18598-e66f-49aa-855e-54ba8e4abced-kube-api-access-kqxgx\") pod \"nmstate-handler-fbf99\" (UID: \"f7f18598-e66f-49aa-855e-54ba8e4abced\") " pod="openshift-nmstate/nmstate-handler-fbf99" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.338376 4620 util.go:30] "No sandbox for pod can be found. 
Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.346855 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/316915e8-4161-4c98-927b-0434cbd2df0b-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-8c7r2\" (UID: \"316915e8-4161-4c98-927b-0434cbd2df0b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c7r2"
Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.346935 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/316915e8-4161-4c98-927b-0434cbd2df0b-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-8c7r2\" (UID: \"316915e8-4161-4c98-927b-0434cbd2df0b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c7r2"
Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.346999 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s67hl\" (UniqueName: \"kubernetes.io/projected/316915e8-4161-4c98-927b-0434cbd2df0b-kube-api-access-s67hl\") pod \"nmstate-console-plugin-7754f76f8b-8c7r2\" (UID: \"316915e8-4161-4c98-927b-0434cbd2df0b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c7r2"
Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.348429 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/316915e8-4161-4c98-927b-0434cbd2df0b-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-8c7r2\" (UID: \"316915e8-4161-4c98-927b-0434cbd2df0b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c7r2"
Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.351327 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/316915e8-4161-4c98-927b-0434cbd2df0b-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-8c7r2\" (UID: \"316915e8-4161-4c98-927b-0434cbd2df0b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c7r2"
Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.368209 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5bf868b88d-6k4t4"]
Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.369059 4620 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console/console-5bf868b88d-6k4t4" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.375815 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s67hl\" (UniqueName: \"kubernetes.io/projected/316915e8-4161-4c98-927b-0434cbd2df0b-kube-api-access-s67hl\") pod \"nmstate-console-plugin-7754f76f8b-8c7r2\" (UID: \"316915e8-4161-4c98-927b-0434cbd2df0b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c7r2" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.405849 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5bf868b88d-6k4t4"] Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.448722 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45chz\" (UniqueName: \"kubernetes.io/projected/b341510d-240d-47b5-8b75-99b84fe51cdf-kube-api-access-45chz\") pod \"console-5bf868b88d-6k4t4\" (UID: \"b341510d-240d-47b5-8b75-99b84fe51cdf\") " pod="openshift-console/console-5bf868b88d-6k4t4" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.448854 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b341510d-240d-47b5-8b75-99b84fe51cdf-oauth-serving-cert\") pod \"console-5bf868b88d-6k4t4\" (UID: \"b341510d-240d-47b5-8b75-99b84fe51cdf\") " pod="openshift-console/console-5bf868b88d-6k4t4" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.448892 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b341510d-240d-47b5-8b75-99b84fe51cdf-console-oauth-config\") pod \"console-5bf868b88d-6k4t4\" (UID: \"b341510d-240d-47b5-8b75-99b84fe51cdf\") " pod="openshift-console/console-5bf868b88d-6k4t4" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.448917 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b341510d-240d-47b5-8b75-99b84fe51cdf-trusted-ca-bundle\") pod \"console-5bf868b88d-6k4t4\" (UID: \"b341510d-240d-47b5-8b75-99b84fe51cdf\") " pod="openshift-console/console-5bf868b88d-6k4t4" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.448953 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b341510d-240d-47b5-8b75-99b84fe51cdf-console-config\") pod \"console-5bf868b88d-6k4t4\" (UID: \"b341510d-240d-47b5-8b75-99b84fe51cdf\") " pod="openshift-console/console-5bf868b88d-6k4t4" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.448975 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b341510d-240d-47b5-8b75-99b84fe51cdf-console-serving-cert\") pod \"console-5bf868b88d-6k4t4\" (UID: \"b341510d-240d-47b5-8b75-99b84fe51cdf\") " pod="openshift-console/console-5bf868b88d-6k4t4" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.448993 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b341510d-240d-47b5-8b75-99b84fe51cdf-service-ca\") pod \"console-5bf868b88d-6k4t4\" (UID: \"b341510d-240d-47b5-8b75-99b84fe51cdf\") " 
pod="openshift-console/console-5bf868b88d-6k4t4" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.472496 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c7r2" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.549679 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b341510d-240d-47b5-8b75-99b84fe51cdf-console-oauth-config\") pod \"console-5bf868b88d-6k4t4\" (UID: \"b341510d-240d-47b5-8b75-99b84fe51cdf\") " pod="openshift-console/console-5bf868b88d-6k4t4" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.551381 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b341510d-240d-47b5-8b75-99b84fe51cdf-trusted-ca-bundle\") pod \"console-5bf868b88d-6k4t4\" (UID: \"b341510d-240d-47b5-8b75-99b84fe51cdf\") " pod="openshift-console/console-5bf868b88d-6k4t4" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.551440 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b341510d-240d-47b5-8b75-99b84fe51cdf-console-config\") pod \"console-5bf868b88d-6k4t4\" (UID: \"b341510d-240d-47b5-8b75-99b84fe51cdf\") " pod="openshift-console/console-5bf868b88d-6k4t4" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.551466 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b341510d-240d-47b5-8b75-99b84fe51cdf-console-serving-cert\") pod \"console-5bf868b88d-6k4t4\" (UID: \"b341510d-240d-47b5-8b75-99b84fe51cdf\") " pod="openshift-console/console-5bf868b88d-6k4t4" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.551488 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b341510d-240d-47b5-8b75-99b84fe51cdf-service-ca\") pod \"console-5bf868b88d-6k4t4\" (UID: \"b341510d-240d-47b5-8b75-99b84fe51cdf\") " pod="openshift-console/console-5bf868b88d-6k4t4" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.551613 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45chz\" (UniqueName: \"kubernetes.io/projected/b341510d-240d-47b5-8b75-99b84fe51cdf-kube-api-access-45chz\") pod \"console-5bf868b88d-6k4t4\" (UID: \"b341510d-240d-47b5-8b75-99b84fe51cdf\") " pod="openshift-console/console-5bf868b88d-6k4t4" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.551721 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b341510d-240d-47b5-8b75-99b84fe51cdf-oauth-serving-cert\") pod \"console-5bf868b88d-6k4t4\" (UID: \"b341510d-240d-47b5-8b75-99b84fe51cdf\") " pod="openshift-console/console-5bf868b88d-6k4t4" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.552628 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b341510d-240d-47b5-8b75-99b84fe51cdf-oauth-serving-cert\") pod \"console-5bf868b88d-6k4t4\" (UID: \"b341510d-240d-47b5-8b75-99b84fe51cdf\") " pod="openshift-console/console-5bf868b88d-6k4t4" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.553294 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/b341510d-240d-47b5-8b75-99b84fe51cdf-service-ca\") pod \"console-5bf868b88d-6k4t4\" (UID: \"b341510d-240d-47b5-8b75-99b84fe51cdf\") " pod="openshift-console/console-5bf868b88d-6k4t4" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.554326 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b341510d-240d-47b5-8b75-99b84fe51cdf-console-config\") pod \"console-5bf868b88d-6k4t4\" (UID: \"b341510d-240d-47b5-8b75-99b84fe51cdf\") " pod="openshift-console/console-5bf868b88d-6k4t4" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.554933 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b341510d-240d-47b5-8b75-99b84fe51cdf-trusted-ca-bundle\") pod \"console-5bf868b88d-6k4t4\" (UID: \"b341510d-240d-47b5-8b75-99b84fe51cdf\") " pod="openshift-console/console-5bf868b88d-6k4t4" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.559416 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b341510d-240d-47b5-8b75-99b84fe51cdf-console-serving-cert\") pod \"console-5bf868b88d-6k4t4\" (UID: \"b341510d-240d-47b5-8b75-99b84fe51cdf\") " pod="openshift-console/console-5bf868b88d-6k4t4" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.560377 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b341510d-240d-47b5-8b75-99b84fe51cdf-console-oauth-config\") pod \"console-5bf868b88d-6k4t4\" (UID: \"b341510d-240d-47b5-8b75-99b84fe51cdf\") " pod="openshift-console/console-5bf868b88d-6k4t4" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.572719 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45chz\" (UniqueName: \"kubernetes.io/projected/b341510d-240d-47b5-8b75-99b84fe51cdf-kube-api-access-45chz\") pod \"console-5bf868b88d-6k4t4\" (UID: \"b341510d-240d-47b5-8b75-99b84fe51cdf\") " pod="openshift-console/console-5bf868b88d-6k4t4" Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.600420 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-qc5bc"] Jan 29 15:14:35 crc kubenswrapper[4620]: W0129 15:14:35.610080 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff1f7291_09f7_47b1_b883_6d473cfd46e2.slice/crio-7c01c4974c3752dc58cc7771dea31086d24f50d0b3cdcd4102b33f0307c46a8f WatchSource:0}: Error finding container 7c01c4974c3752dc58cc7771dea31086d24f50d0b3cdcd4102b33f0307c46a8f: Status 404 returned error can't find the container with id 7c01c4974c3752dc58cc7771dea31086d24f50d0b3cdcd4102b33f0307c46a8f Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.691636 4620 util.go:30] "No sandbox for pod can be found. 
Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.774632 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c7r2"]
Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.846207 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-qc5bc" event={"ID":"ff1f7291-09f7-47b1-b883-6d473cfd46e2","Type":"ContainerStarted","Data":"7c01c4974c3752dc58cc7771dea31086d24f50d0b3cdcd4102b33f0307c46a8f"}
Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.847054 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c7r2" event={"ID":"316915e8-4161-4c98-927b-0434cbd2df0b","Type":"ContainerStarted","Data":"467bfe1acb8adcd53ff0c9b025253b78054e56b5825d6fe293f206474134475d"}
Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.851949 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-fbf99" event={"ID":"f7f18598-e66f-49aa-855e-54ba8e4abced","Type":"ContainerStarted","Data":"a9f764f323d5b6fa7917e48396224032334f6bbf0df9a181d95740d1030d3e54"}
Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.869786 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-4nx94"]
Jan 29 15:14:35 crc kubenswrapper[4620]: I0129 15:14:35.940842 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5bf868b88d-6k4t4"]
Jan 29 15:14:35 crc kubenswrapper[4620]: W0129 15:14:35.951197 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb341510d_240d_47b5_8b75_99b84fe51cdf.slice/crio-7f61b8eff5512e070204f2fd47e777e434acab45902640689e221804c2d4bbf8 WatchSource:0}: Error finding container 7f61b8eff5512e070204f2fd47e777e434acab45902640689e221804c2d4bbf8: Status 404 returned error can't find the container with id 7f61b8eff5512e070204f2fd47e777e434acab45902640689e221804c2d4bbf8
Jan 29 15:14:36 crc kubenswrapper[4620]: I0129 15:14:36.857871 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5bf868b88d-6k4t4" event={"ID":"b341510d-240d-47b5-8b75-99b84fe51cdf","Type":"ContainerStarted","Data":"ff0b80f18060e808f7697a315e80eab62c699d18b2fded4f5bed91dcc0253990"}
Jan 29 15:14:36 crc kubenswrapper[4620]: I0129 15:14:36.858167 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5bf868b88d-6k4t4" event={"ID":"b341510d-240d-47b5-8b75-99b84fe51cdf","Type":"ContainerStarted","Data":"7f61b8eff5512e070204f2fd47e777e434acab45902640689e221804c2d4bbf8"}
Jan 29 15:14:36 crc kubenswrapper[4620]: I0129 15:14:36.859106 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4nx94" event={"ID":"2fe4a43f-5be7-4f39-9698-53a404bc411e","Type":"ContainerStarted","Data":"9da9c2ad7f057bd61b02ee2c9f1350d0bf493eb4dcf3939a1ea692f7379b8f81"}
Jan 29 15:14:36 crc kubenswrapper[4620]: I0129 15:14:36.889047 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5bf868b88d-6k4t4" podStartSLOduration=1.889024546 podStartE2EDuration="1.889024546s" podCreationTimestamp="2026-01-29 15:14:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:14:36.881301042 +0000 UTC m=+817.494128697" watchObservedRunningTime="2026-01-29 15:14:36.889024546 +0000 UTC m=+817.501852191"
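The kubelet.go:2453 "SyncLoop (PLEG)" entries come from the Pod Lifecycle Event Generator, which relists container state from the runtime and turns state differences into ContainerStarted/ContainerDied events for the sync loop. A simplified sketch of that diffing, assuming a plain old/new state map rather than the real PLEG implementation:

    package main

    import "fmt"

    // plegEvent mirrors the shape of the logged events: a pod-scoped
    // container state transition. Field names are illustrative only.
    type plegEvent struct {
        ID, Type, Data string
    }

    // relist compares the previous and current container states and emits
    // one event per transition, the way PLEG feeds the kubelet sync loop.
    func relist(old, cur map[string]string) []plegEvent {
        var events []plegEvent
        for id, state := range cur {
            if old[id] != state {
                events = append(events, plegEvent{ID: id, Type: "Container" + state, Data: id})
            }
        }
        return events
    }

    func main() {
        old := map[string]string{}
        cur := map[string]string{"7c01c4974c37": "Started"}
        for _, e := range relist(old, cur) {
            fmt.Printf("SyncLoop (PLEG): event for pod event=%+v\n", e)
        }
    }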
Jan 29 15:14:44 crc kubenswrapper[4620]: I0129 15:14:44.907337 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4nx94" event={"ID":"2fe4a43f-5be7-4f39-9698-53a404bc411e","Type":"ContainerStarted","Data":"ab285563616fd6c23fdffdfa654070c8e10e8afe17d877aec4cafca9cc5a01e3"}
Jan 29 15:14:44 crc kubenswrapper[4620]: I0129 15:14:44.908096 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4nx94"
Jan 29 15:14:44 crc kubenswrapper[4620]: I0129 15:14:44.909653 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-fbf99" event={"ID":"f7f18598-e66f-49aa-855e-54ba8e4abced","Type":"ContainerStarted","Data":"65901affd5a043aceaf74463055e6ce38d1b1fd8e9faca5e3a9bda61ff44f94a"}
Jan 29 15:14:44 crc kubenswrapper[4620]: I0129 15:14:44.911189 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-fbf99"
Jan 29 15:14:44 crc kubenswrapper[4620]: I0129 15:14:44.916623 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-qc5bc" event={"ID":"ff1f7291-09f7-47b1-b883-6d473cfd46e2","Type":"ContainerStarted","Data":"462e3345f6131daa38602b1cd28b2b2f1fa429d52c5030a74690353d55761a21"}
Jan 29 15:14:44 crc kubenswrapper[4620]: I0129 15:14:44.918507 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c7r2" event={"ID":"316915e8-4161-4c98-927b-0434cbd2df0b","Type":"ContainerStarted","Data":"4534bf31785e3c48cd45850e8945c10879527e49dacaf097607555564c698cf9"}
Jan 29 15:14:44 crc kubenswrapper[4620]: I0129 15:14:44.927332 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4nx94" podStartSLOduration=3.11838852 podStartE2EDuration="10.926180441s" podCreationTimestamp="2026-01-29 15:14:34 +0000 UTC" firstStartedPulling="2026-01-29 15:14:35.883784157 +0000 UTC m=+816.496611802" lastFinishedPulling="2026-01-29 15:14:43.691576078 +0000 UTC m=+824.304403723" observedRunningTime="2026-01-29 15:14:44.922611818 +0000 UTC m=+825.535439503" watchObservedRunningTime="2026-01-29 15:14:44.926180441 +0000 UTC m=+825.539008086"
Jan 29 15:14:44 crc kubenswrapper[4620]: I0129 15:14:44.944662 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8c7r2" podStartSLOduration=1.977939635 podStartE2EDuration="9.944644012s" podCreationTimestamp="2026-01-29 15:14:35 +0000 UTC" firstStartedPulling="2026-01-29 15:14:35.792370557 +0000 UTC m=+816.405198202" lastFinishedPulling="2026-01-29 15:14:43.759074934 +0000 UTC m=+824.371902579" observedRunningTime="2026-01-29 15:14:44.942377311 +0000 UTC m=+825.555204966" watchObservedRunningTime="2026-01-29 15:14:44.944644012 +0000 UTC m=+825.557471657"
Jan 29 15:14:44 crc kubenswrapper[4620]: I0129 15:14:44.973300 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-fbf99" podStartSLOduration=2.642422609 podStartE2EDuration="10.973284844s" podCreationTimestamp="2026-01-29 15:14:34 +0000 UTC" firstStartedPulling="2026-01-29 15:14:35.426920798 +0000 UTC m=+816.039748443" lastFinishedPulling="2026-01-29 15:14:43.757783033 +0000 UTC m=+824.370610678" observedRunningTime="2026-01-29 15:14:44.971220039 +0000 UTC m=+825.584047694" watchObservedRunningTime="2026-01-29 15:14:44.973284844 +0000 UTC m=+825.586112489"
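The pod_startup_latency_tracker.go:104 entries encode a simple relation that the numbers above bear out: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). For nmstate-handler-fbf99: 10.973284844s E2E minus 8.330862235s of pulling (15:14:43.757783033 - 15:14:35.426920798) gives 2.642422609s, exactly the logged SLO duration. The same arithmetic in Go, using the webhook pod's timestamps as copied from the entry above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps from the nmstate-webhook-8474b5b9d8-4nx94 entry above.
        created := time.Date(2026, 1, 29, 15, 14, 34, 0, time.UTC)
        running := time.Date(2026, 1, 29, 15, 14, 44, 926180441, time.UTC)
        pullStart := time.Date(2026, 1, 29, 15, 14, 35, 883784157, time.UTC)
        pullEnd := time.Date(2026, 1, 29, 15, 14, 43, 691576078, time.UTC)

        e2e := running.Sub(created)    // podStartE2EDuration: 10.926180441s
        pull := pullEnd.Sub(pullStart) // image pull window:   7.807791921s
        slo := e2e - pull              // podStartSLOduration: 3.11838852s

        fmt.Println(e2e, pull, slo)
    }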
Jan 29 15:14:45 crc kubenswrapper[4620]: I0129 15:14:45.693214 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5bf868b88d-6k4t4"
Jan 29 15:14:45 crc kubenswrapper[4620]: I0129 15:14:45.693252 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5bf868b88d-6k4t4"
Jan 29 15:14:45 crc kubenswrapper[4620]: I0129 15:14:45.698025 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5bf868b88d-6k4t4"
Jan 29 15:14:45 crc kubenswrapper[4620]: I0129 15:14:45.926420 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5bf868b88d-6k4t4"
Jan 29 15:14:45 crc kubenswrapper[4620]: I0129 15:14:45.985705 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-z57gf"]
Jan 29 15:14:50 crc kubenswrapper[4620]: I0129 15:14:50.366016 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-fbf99"
Jan 29 15:14:51 crc kubenswrapper[4620]: I0129 15:14:51.959149 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-qc5bc" event={"ID":"ff1f7291-09f7-47b1-b883-6d473cfd46e2","Type":"ContainerStarted","Data":"0fcd796a6b8852539b6818b0e7b88d5f42d306d91f6363f7fa3cc1e6047679d2"}
Jan 29 15:14:51 crc kubenswrapper[4620]: I0129 15:14:51.976882 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-qc5bc" podStartSLOduration=2.39698576 podStartE2EDuration="17.976862118s" podCreationTimestamp="2026-01-29 15:14:34 +0000 UTC" firstStartedPulling="2026-01-29 15:14:35.611676877 +0000 UTC m=+816.224504522" lastFinishedPulling="2026-01-29 15:14:51.191553235 +0000 UTC m=+831.804380880" observedRunningTime="2026-01-29 15:14:51.975294979 +0000 UTC m=+832.588122624" watchObservedRunningTime="2026-01-29 15:14:51.976862118 +0000 UTC m=+832.589689783"
Jan 29 15:14:55 crc kubenswrapper[4620]: I0129 15:14:55.294970 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-4nx94"
Jan 29 15:15:00 crc kubenswrapper[4620]: I0129 15:15:00.148454 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494995-flx9d"]
Jan 29 15:15:00 crc kubenswrapper[4620]: I0129 15:15:00.149442 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-flx9d"
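The probe transitions for console-5bf868b88d-6k4t4 show the usual ordering: the startup probe first reports unhealthy, then "started", and only after that does the readiness probe run and flip the pod to "ready" (which in turn lets the replaced console-f9d7485db-z57gf be deleted). A small sketch of that gating, with invented states rather than kubelet's actual probe manager:

    package main

    import "fmt"

    // syncProbes mirrors the gating visible above: readiness results are
    // not reported until the startup probe has succeeded once.
    func syncProbes(startupAttempts []bool) {
        for _, ok := range startupAttempts {
            if !ok {
                fmt.Println(`probe="startup" status="unhealthy"`)
                continue
            }
            fmt.Println(`probe="startup" status="started"`)
            // Only now does the readiness probe begin to run.
            fmt.Println(`probe="readiness" status="ready"`)
            return
        }
    }

    func main() {
        // One failed startup attempt, then success: unhealthy -> started -> ready.
        syncProbes([]bool{false, true})
    }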
Jan 29 15:15:00 crc kubenswrapper[4620]: I0129 15:15:00.153117 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 29 15:15:00 crc kubenswrapper[4620]: I0129 15:15:00.153149 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 29 15:15:00 crc kubenswrapper[4620]: I0129 15:15:00.170423 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494995-flx9d"]
Jan 29 15:15:00 crc kubenswrapper[4620]: I0129 15:15:00.285987 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75672602-dcbe-4980-9e12-93347db787b3-secret-volume\") pod \"collect-profiles-29494995-flx9d\" (UID: \"75672602-dcbe-4980-9e12-93347db787b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-flx9d"
Jan 29 15:15:00 crc kubenswrapper[4620]: I0129 15:15:00.286602 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75672602-dcbe-4980-9e12-93347db787b3-config-volume\") pod \"collect-profiles-29494995-flx9d\" (UID: \"75672602-dcbe-4980-9e12-93347db787b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-flx9d"
Jan 29 15:15:00 crc kubenswrapper[4620]: I0129 15:15:00.286715 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlmz4\" (UniqueName: \"kubernetes.io/projected/75672602-dcbe-4980-9e12-93347db787b3-kube-api-access-dlmz4\") pod \"collect-profiles-29494995-flx9d\" (UID: \"75672602-dcbe-4980-9e12-93347db787b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-flx9d"
Jan 29 15:15:00 crc kubenswrapper[4620]: I0129 15:15:00.387958 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75672602-dcbe-4980-9e12-93347db787b3-secret-volume\") pod \"collect-profiles-29494995-flx9d\" (UID: \"75672602-dcbe-4980-9e12-93347db787b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-flx9d"
Jan 29 15:15:00 crc kubenswrapper[4620]: I0129 15:15:00.388000 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75672602-dcbe-4980-9e12-93347db787b3-config-volume\") pod \"collect-profiles-29494995-flx9d\" (UID: \"75672602-dcbe-4980-9e12-93347db787b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-flx9d"
Jan 29 15:15:00 crc kubenswrapper[4620]: I0129 15:15:00.388020 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlmz4\" (UniqueName: \"kubernetes.io/projected/75672602-dcbe-4980-9e12-93347db787b3-kube-api-access-dlmz4\") pod \"collect-profiles-29494995-flx9d\" (UID: \"75672602-dcbe-4980-9e12-93347db787b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-flx9d"
Jan 29 15:15:00 crc kubenswrapper[4620]: I0129 15:15:00.390966 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75672602-dcbe-4980-9e12-93347db787b3-config-volume\") pod
\"collect-profiles-29494995-flx9d\" (UID: \"75672602-dcbe-4980-9e12-93347db787b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-flx9d" Jan 29 15:15:00 crc kubenswrapper[4620]: I0129 15:15:00.404139 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlmz4\" (UniqueName: \"kubernetes.io/projected/75672602-dcbe-4980-9e12-93347db787b3-kube-api-access-dlmz4\") pod \"collect-profiles-29494995-flx9d\" (UID: \"75672602-dcbe-4980-9e12-93347db787b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-flx9d" Jan 29 15:15:00 crc kubenswrapper[4620]: I0129 15:15:00.410285 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75672602-dcbe-4980-9e12-93347db787b3-secret-volume\") pod \"collect-profiles-29494995-flx9d\" (UID: \"75672602-dcbe-4980-9e12-93347db787b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-flx9d" Jan 29 15:15:00 crc kubenswrapper[4620]: I0129 15:15:00.508645 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-flx9d" Jan 29 15:15:00 crc kubenswrapper[4620]: I0129 15:15:00.704001 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494995-flx9d"] Jan 29 15:15:00 crc kubenswrapper[4620]: W0129 15:15:00.716593 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75672602_dcbe_4980_9e12_93347db787b3.slice/crio-49892e330e93e46836dde895a945a6096103a9ac2aee66ad60d808e26cb844c8 WatchSource:0}: Error finding container 49892e330e93e46836dde895a945a6096103a9ac2aee66ad60d808e26cb844c8: Status 404 returned error can't find the container with id 49892e330e93e46836dde895a945a6096103a9ac2aee66ad60d808e26cb844c8 Jan 29 15:15:01 crc kubenswrapper[4620]: I0129 15:15:01.021143 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-flx9d" event={"ID":"75672602-dcbe-4980-9e12-93347db787b3","Type":"ContainerStarted","Data":"49892e330e93e46836dde895a945a6096103a9ac2aee66ad60d808e26cb844c8"} Jan 29 15:15:02 crc kubenswrapper[4620]: I0129 15:15:02.068389 4620 generic.go:334] "Generic (PLEG): container finished" podID="75672602-dcbe-4980-9e12-93347db787b3" containerID="88824b38039572dd7a781cb6ff75ce500963cb9316fc8134dbd32950489b6617" exitCode=0 Jan 29 15:15:02 crc kubenswrapper[4620]: I0129 15:15:02.068480 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-flx9d" event={"ID":"75672602-dcbe-4980-9e12-93347db787b3","Type":"ContainerDied","Data":"88824b38039572dd7a781cb6ff75ce500963cb9316fc8134dbd32950489b6617"} Jan 29 15:15:03 crc kubenswrapper[4620]: I0129 15:15:03.307430 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-flx9d"
Jan 29 15:15:03 crc kubenswrapper[4620]: I0129 15:15:03.469300 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlmz4\" (UniqueName: \"kubernetes.io/projected/75672602-dcbe-4980-9e12-93347db787b3-kube-api-access-dlmz4\") pod \"75672602-dcbe-4980-9e12-93347db787b3\" (UID: \"75672602-dcbe-4980-9e12-93347db787b3\") "
Jan 29 15:15:03 crc kubenswrapper[4620]: I0129 15:15:03.469531 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75672602-dcbe-4980-9e12-93347db787b3-config-volume\") pod \"75672602-dcbe-4980-9e12-93347db787b3\" (UID: \"75672602-dcbe-4980-9e12-93347db787b3\") "
Jan 29 15:15:03 crc kubenswrapper[4620]: I0129 15:15:03.469565 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75672602-dcbe-4980-9e12-93347db787b3-secret-volume\") pod \"75672602-dcbe-4980-9e12-93347db787b3\" (UID: \"75672602-dcbe-4980-9e12-93347db787b3\") "
Jan 29 15:15:03 crc kubenswrapper[4620]: I0129 15:15:03.470821 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75672602-dcbe-4980-9e12-93347db787b3-config-volume" (OuterVolumeSpecName: "config-volume") pod "75672602-dcbe-4980-9e12-93347db787b3" (UID: "75672602-dcbe-4980-9e12-93347db787b3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:15:03 crc kubenswrapper[4620]: I0129 15:15:03.475084 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75672602-dcbe-4980-9e12-93347db787b3-kube-api-access-dlmz4" (OuterVolumeSpecName: "kube-api-access-dlmz4") pod "75672602-dcbe-4980-9e12-93347db787b3" (UID: "75672602-dcbe-4980-9e12-93347db787b3"). InnerVolumeSpecName "kube-api-access-dlmz4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:15:03 crc kubenswrapper[4620]: I0129 15:15:03.475077 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75672602-dcbe-4980-9e12-93347db787b3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "75672602-dcbe-4980-9e12-93347db787b3" (UID: "75672602-dcbe-4980-9e12-93347db787b3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
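Teardown is the mount sequence run in reverse: once the pod's containers are gone, the reconciler starts UnmountVolume for each volume (reconciler_common.go:159), each plugin's TearDown succeeds (operation_generator.go:803), and the volume is finally reported detached with an empty DevicePath (reconciler_common.go:293). A mirror-image sketch of the earlier reconcile loop, with the same caveat that the types are illustrative:

    package main

    import "fmt"

    // teardown unmounts every volume still in the actual state for a
    // deleted pod, then marks it detached, mirroring the sequence above.
    func teardown(podUID string, mounted map[string]string) {
        for name, plugin := range mounted {
            fmt.Printf("operationExecutor.UnmountVolume started for volume %q pod %q\n", name, podUID)
            fmt.Printf("UnmountVolume.TearDown succeeded for volume %s/%s\n", plugin, name)
            delete(mounted, name)
            fmt.Printf("Volume detached for volume %q on node \"crc\" DevicePath \"\"\n", name)
        }
    }

    func main() {
        teardown("75672602-dcbe-4980-9e12-93347db787b3", map[string]string{
            "config-volume": "kubernetes.io/configmap",
            "secret-volume": "kubernetes.io/secret",
        })
    }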
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:15:03 crc kubenswrapper[4620]: I0129 15:15:03.571106 4620 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75672602-dcbe-4980-9e12-93347db787b3-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:03 crc kubenswrapper[4620]: I0129 15:15:03.571151 4620 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75672602-dcbe-4980-9e12-93347db787b3-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:03 crc kubenswrapper[4620]: I0129 15:15:03.571162 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlmz4\" (UniqueName: \"kubernetes.io/projected/75672602-dcbe-4980-9e12-93347db787b3-kube-api-access-dlmz4\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:04 crc kubenswrapper[4620]: I0129 15:15:04.085031 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-flx9d" event={"ID":"75672602-dcbe-4980-9e12-93347db787b3","Type":"ContainerDied","Data":"49892e330e93e46836dde895a945a6096103a9ac2aee66ad60d808e26cb844c8"} Jan 29 15:15:04 crc kubenswrapper[4620]: I0129 15:15:04.085589 4620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49892e330e93e46836dde895a945a6096103a9ac2aee66ad60d808e26cb844c8" Jan 29 15:15:04 crc kubenswrapper[4620]: I0129 15:15:04.085059 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-flx9d" Jan 29 15:15:08 crc kubenswrapper[4620]: I0129 15:15:08.239020 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx"] Jan 29 15:15:08 crc kubenswrapper[4620]: E0129 15:15:08.240827 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75672602-dcbe-4980-9e12-93347db787b3" containerName="collect-profiles" Jan 29 15:15:08 crc kubenswrapper[4620]: I0129 15:15:08.240917 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="75672602-dcbe-4980-9e12-93347db787b3" containerName="collect-profiles" Jan 29 15:15:08 crc kubenswrapper[4620]: I0129 15:15:08.241063 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="75672602-dcbe-4980-9e12-93347db787b3" containerName="collect-profiles" Jan 29 15:15:08 crc kubenswrapper[4620]: I0129 15:15:08.241890 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx" Jan 29 15:15:08 crc kubenswrapper[4620]: I0129 15:15:08.247805 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 15:15:08 crc kubenswrapper[4620]: I0129 15:15:08.256017 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx"] Jan 29 15:15:08 crc kubenswrapper[4620]: I0129 15:15:08.443088 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2b74701-552f-4983-9361-3889af3c6c25-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx\" (UID: \"c2b74701-552f-4983-9361-3889af3c6c25\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx" Jan 29 15:15:08 crc kubenswrapper[4620]: I0129 15:15:08.443389 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttt55\" (UniqueName: \"kubernetes.io/projected/c2b74701-552f-4983-9361-3889af3c6c25-kube-api-access-ttt55\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx\" (UID: \"c2b74701-552f-4983-9361-3889af3c6c25\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx" Jan 29 15:15:08 crc kubenswrapper[4620]: I0129 15:15:08.443454 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2b74701-552f-4983-9361-3889af3c6c25-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx\" (UID: \"c2b74701-552f-4983-9361-3889af3c6c25\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx" Jan 29 15:15:08 crc kubenswrapper[4620]: I0129 15:15:08.544167 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttt55\" (UniqueName: \"kubernetes.io/projected/c2b74701-552f-4983-9361-3889af3c6c25-kube-api-access-ttt55\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx\" (UID: \"c2b74701-552f-4983-9361-3889af3c6c25\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx" Jan 29 15:15:08 crc kubenswrapper[4620]: I0129 15:15:08.544224 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2b74701-552f-4983-9361-3889af3c6c25-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx\" (UID: \"c2b74701-552f-4983-9361-3889af3c6c25\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx" Jan 29 15:15:08 crc kubenswrapper[4620]: I0129 15:15:08.544265 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2b74701-552f-4983-9361-3889af3c6c25-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx\" (UID: \"c2b74701-552f-4983-9361-3889af3c6c25\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx" Jan 29 15:15:08 crc kubenswrapper[4620]: I0129 15:15:08.544994 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/c2b74701-552f-4983-9361-3889af3c6c25-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx\" (UID: \"c2b74701-552f-4983-9361-3889af3c6c25\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx" Jan 29 15:15:08 crc kubenswrapper[4620]: I0129 15:15:08.545015 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2b74701-552f-4983-9361-3889af3c6c25-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx\" (UID: \"c2b74701-552f-4983-9361-3889af3c6c25\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx" Jan 29 15:15:08 crc kubenswrapper[4620]: I0129 15:15:08.566525 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttt55\" (UniqueName: \"kubernetes.io/projected/c2b74701-552f-4983-9361-3889af3c6c25-kube-api-access-ttt55\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx\" (UID: \"c2b74701-552f-4983-9361-3889af3c6c25\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx" Jan 29 15:15:08 crc kubenswrapper[4620]: I0129 15:15:08.860850 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx" Jan 29 15:15:09 crc kubenswrapper[4620]: I0129 15:15:09.285407 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx"] Jan 29 15:15:10 crc kubenswrapper[4620]: I0129 15:15:10.120703 4620 generic.go:334] "Generic (PLEG): container finished" podID="c2b74701-552f-4983-9361-3889af3c6c25" containerID="5380a528f2925eb4068f22a84131ec11f030cd1cb3318efc6c5a7e06025cb8e9" exitCode=0 Jan 29 15:15:10 crc kubenswrapper[4620]: I0129 15:15:10.120904 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx" event={"ID":"c2b74701-552f-4983-9361-3889af3c6c25","Type":"ContainerDied","Data":"5380a528f2925eb4068f22a84131ec11f030cd1cb3318efc6c5a7e06025cb8e9"} Jan 29 15:15:10 crc kubenswrapper[4620]: I0129 15:15:10.121039 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx" event={"ID":"c2b74701-552f-4983-9361-3889af3c6c25","Type":"ContainerStarted","Data":"fdf83df8dd3d3bf388b11bda603b56e76ad37997d3f3f25f86fd52398816bd38"} Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.052591 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-z57gf" podUID="aa662f18-6ab4-43b8-8e65-8de41043b74d" containerName="console" containerID="cri-o://2704f88e1799bacba75f2bfa870b35017328409f120ee96b0a38a88ef4ac79a7" gracePeriod=15 Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.458283 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-z57gf_aa662f18-6ab4-43b8-8e65-8de41043b74d/console/0.log" Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.458673 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-z57gf" Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.583708 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-service-ca\") pod \"aa662f18-6ab4-43b8-8e65-8de41043b74d\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.583785 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lgkk\" (UniqueName: \"kubernetes.io/projected/aa662f18-6ab4-43b8-8e65-8de41043b74d-kube-api-access-2lgkk\") pod \"aa662f18-6ab4-43b8-8e65-8de41043b74d\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.583819 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-oauth-serving-cert\") pod \"aa662f18-6ab4-43b8-8e65-8de41043b74d\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.583839 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-console-config\") pod \"aa662f18-6ab4-43b8-8e65-8de41043b74d\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.583892 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aa662f18-6ab4-43b8-8e65-8de41043b74d-console-oauth-config\") pod \"aa662f18-6ab4-43b8-8e65-8de41043b74d\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.583925 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-trusted-ca-bundle\") pod \"aa662f18-6ab4-43b8-8e65-8de41043b74d\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.583951 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa662f18-6ab4-43b8-8e65-8de41043b74d-console-serving-cert\") pod \"aa662f18-6ab4-43b8-8e65-8de41043b74d\" (UID: \"aa662f18-6ab4-43b8-8e65-8de41043b74d\") " Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.585149 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "aa662f18-6ab4-43b8-8e65-8de41043b74d" (UID: "aa662f18-6ab4-43b8-8e65-8de41043b74d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.585169 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "aa662f18-6ab4-43b8-8e65-8de41043b74d" (UID: "aa662f18-6ab4-43b8-8e65-8de41043b74d"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.585136 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-console-config" (OuterVolumeSpecName: "console-config") pod "aa662f18-6ab4-43b8-8e65-8de41043b74d" (UID: "aa662f18-6ab4-43b8-8e65-8de41043b74d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.585478 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-service-ca" (OuterVolumeSpecName: "service-ca") pod "aa662f18-6ab4-43b8-8e65-8de41043b74d" (UID: "aa662f18-6ab4-43b8-8e65-8de41043b74d"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.590479 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa662f18-6ab4-43b8-8e65-8de41043b74d-kube-api-access-2lgkk" (OuterVolumeSpecName: "kube-api-access-2lgkk") pod "aa662f18-6ab4-43b8-8e65-8de41043b74d" (UID: "aa662f18-6ab4-43b8-8e65-8de41043b74d"). InnerVolumeSpecName "kube-api-access-2lgkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.591499 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa662f18-6ab4-43b8-8e65-8de41043b74d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "aa662f18-6ab4-43b8-8e65-8de41043b74d" (UID: "aa662f18-6ab4-43b8-8e65-8de41043b74d"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.592648 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa662f18-6ab4-43b8-8e65-8de41043b74d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "aa662f18-6ab4-43b8-8e65-8de41043b74d" (UID: "aa662f18-6ab4-43b8-8e65-8de41043b74d"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.685639 4620 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.685711 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lgkk\" (UniqueName: \"kubernetes.io/projected/aa662f18-6ab4-43b8-8e65-8de41043b74d-kube-api-access-2lgkk\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.685727 4620 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.685738 4620 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.685749 4620 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aa662f18-6ab4-43b8-8e65-8de41043b74d-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.685788 4620 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa662f18-6ab4-43b8-8e65-8de41043b74d-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:11 crc kubenswrapper[4620]: I0129 15:15:11.685799 4620 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa662f18-6ab4-43b8-8e65-8de41043b74d-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:12 crc kubenswrapper[4620]: I0129 15:15:12.135070 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-z57gf_aa662f18-6ab4-43b8-8e65-8de41043b74d/console/0.log" Jan 29 15:15:12 crc kubenswrapper[4620]: I0129 15:15:12.135122 4620 generic.go:334] "Generic (PLEG): container finished" podID="aa662f18-6ab4-43b8-8e65-8de41043b74d" containerID="2704f88e1799bacba75f2bfa870b35017328409f120ee96b0a38a88ef4ac79a7" exitCode=2 Jan 29 15:15:12 crc kubenswrapper[4620]: I0129 15:15:12.135152 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z57gf" event={"ID":"aa662f18-6ab4-43b8-8e65-8de41043b74d","Type":"ContainerDied","Data":"2704f88e1799bacba75f2bfa870b35017328409f120ee96b0a38a88ef4ac79a7"} Jan 29 15:15:12 crc kubenswrapper[4620]: I0129 15:15:12.135179 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z57gf" event={"ID":"aa662f18-6ab4-43b8-8e65-8de41043b74d","Type":"ContainerDied","Data":"dc4b855e65c23c1e3f912506129d8790e79d40b449d5ef637ba5e7458b64e57d"} Jan 29 15:15:12 crc kubenswrapper[4620]: I0129 15:15:12.135195 4620 scope.go:117] "RemoveContainer" containerID="2704f88e1799bacba75f2bfa870b35017328409f120ee96b0a38a88ef4ac79a7" Jan 29 15:15:12 crc kubenswrapper[4620]: I0129 15:15:12.135299 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-z57gf"
Jan 29 15:15:12 crc kubenswrapper[4620]: I0129 15:15:12.167707 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-z57gf"]
Jan 29 15:15:12 crc kubenswrapper[4620]: I0129 15:15:12.172292 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-z57gf"]
Jan 29 15:15:12 crc kubenswrapper[4620]: I0129 15:15:12.253608 4620 scope.go:117] "RemoveContainer" containerID="2704f88e1799bacba75f2bfa870b35017328409f120ee96b0a38a88ef4ac79a7"
Jan 29 15:15:12 crc kubenswrapper[4620]: E0129 15:15:12.254002 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2704f88e1799bacba75f2bfa870b35017328409f120ee96b0a38a88ef4ac79a7\": container with ID starting with 2704f88e1799bacba75f2bfa870b35017328409f120ee96b0a38a88ef4ac79a7 not found: ID does not exist" containerID="2704f88e1799bacba75f2bfa870b35017328409f120ee96b0a38a88ef4ac79a7"
Jan 29 15:15:12 crc kubenswrapper[4620]: I0129 15:15:12.254038 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2704f88e1799bacba75f2bfa870b35017328409f120ee96b0a38a88ef4ac79a7"} err="failed to get container status \"2704f88e1799bacba75f2bfa870b35017328409f120ee96b0a38a88ef4ac79a7\": rpc error: code = NotFound desc = could not find container \"2704f88e1799bacba75f2bfa870b35017328409f120ee96b0a38a88ef4ac79a7\": container with ID starting with 2704f88e1799bacba75f2bfa870b35017328409f120ee96b0a38a88ef4ac79a7 not found: ID does not exist"
Jan 29 15:15:12 crc kubenswrapper[4620]: I0129 15:15:12.880799 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa662f18-6ab4-43b8-8e65-8de41043b74d" path="/var/lib/kubelet/pods/aa662f18-6ab4-43b8-8e65-8de41043b74d/volumes"
Jan 29 15:15:14 crc kubenswrapper[4620]: I0129 15:15:14.159227 4620 generic.go:334] "Generic (PLEG): container finished" podID="c2b74701-552f-4983-9361-3889af3c6c25" containerID="5708920e8f793712806fda8e8785ce51de2b7b0e3cee2b05ad9cdf78842bbad8" exitCode=0
Jan 29 15:15:14 crc kubenswrapper[4620]: I0129 15:15:14.159298 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx" event={"ID":"c2b74701-552f-4983-9361-3889af3c6c25","Type":"ContainerDied","Data":"5708920e8f793712806fda8e8785ce51de2b7b0e3cee2b05ad9cdf78842bbad8"}
Jan 29 15:15:15 crc kubenswrapper[4620]: I0129 15:15:15.166984 4620 generic.go:334] "Generic (PLEG): container finished" podID="c2b74701-552f-4983-9361-3889af3c6c25" containerID="fe5783fca6248bf705d964330d97a96b878f1cdecb8fa2b98a7a166e24e1d9b2" exitCode=0
Jan 29 15:15:15 crc kubenswrapper[4620]: I0129 15:15:15.167081 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx" event={"ID":"c2b74701-552f-4983-9361-3889af3c6c25","Type":"ContainerDied","Data":"fe5783fca6248bf705d964330d97a96b878f1cdecb8fa2b98a7a166e24e1d9b2"}
Jan 29 15:15:16 crc kubenswrapper[4620]: I0129 15:15:16.413049 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx"
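The E-level pair after "RemoveContainer" is a benign ordering race: the container was already removed along with its pod, so the follow-up ContainerStatus lookup gets NotFound, and the cleanup path can treat that as the work already being done. A sketch of that idempotent deletion, with a hypothetical errNotFound standing in for the gRPC NotFound status check:

    package main

    import (
        "errors"
        "fmt"
    )

    // errNotFound is a hypothetical stand-in for a gRPC NotFound status
    // from the container runtime.
    var errNotFound = errors.New("container not found: ID does not exist")

    // containerStatus simulates querying a container that is already gone.
    func containerStatus(id string) error {
        return fmt.Errorf("rpc error: %w", errNotFound)
    }

    // removeContainer treats NotFound as success: the desired end state
    // (container gone) already holds, so the error is logged and swallowed.
    func removeContainer(id string) {
        if err := containerStatus(id); err != nil {
            if errors.Is(err, errNotFound) {
                fmt.Printf("DeleteContainer returned error containerID=%q err=%v (already removed)\n", id, err)
                return
            }
            panic(err) // a real failure would be retried, not ignored
        }
        fmt.Println("removed", id)
    }

    func main() {
        removeContainer("2704f88e1799")
    }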
Jan 29 15:15:16 crc kubenswrapper[4620]: I0129 15:15:16.542377 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttt55\" (UniqueName: \"kubernetes.io/projected/c2b74701-552f-4983-9361-3889af3c6c25-kube-api-access-ttt55\") pod \"c2b74701-552f-4983-9361-3889af3c6c25\" (UID: \"c2b74701-552f-4983-9361-3889af3c6c25\") "
Jan 29 15:15:16 crc kubenswrapper[4620]: I0129 15:15:16.542417 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2b74701-552f-4983-9361-3889af3c6c25-util\") pod \"c2b74701-552f-4983-9361-3889af3c6c25\" (UID: \"c2b74701-552f-4983-9361-3889af3c6c25\") "
Jan 29 15:15:16 crc kubenswrapper[4620]: I0129 15:15:16.542515 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2b74701-552f-4983-9361-3889af3c6c25-bundle\") pod \"c2b74701-552f-4983-9361-3889af3c6c25\" (UID: \"c2b74701-552f-4983-9361-3889af3c6c25\") "
Jan 29 15:15:16 crc kubenswrapper[4620]: I0129 15:15:16.543644 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2b74701-552f-4983-9361-3889af3c6c25-bundle" (OuterVolumeSpecName: "bundle") pod "c2b74701-552f-4983-9361-3889af3c6c25" (UID: "c2b74701-552f-4983-9361-3889af3c6c25"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:15:16 crc kubenswrapper[4620]: I0129 15:15:16.547373 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2b74701-552f-4983-9361-3889af3c6c25-kube-api-access-ttt55" (OuterVolumeSpecName: "kube-api-access-ttt55") pod "c2b74701-552f-4983-9361-3889af3c6c25" (UID: "c2b74701-552f-4983-9361-3889af3c6c25"). InnerVolumeSpecName "kube-api-access-ttt55". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:15:16 crc kubenswrapper[4620]: I0129 15:15:16.557519 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2b74701-552f-4983-9361-3889af3c6c25-util" (OuterVolumeSpecName: "util") pod "c2b74701-552f-4983-9361-3889af3c6c25" (UID: "c2b74701-552f-4983-9361-3889af3c6c25"). InnerVolumeSpecName "util".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:15:16 crc kubenswrapper[4620]: I0129 15:15:16.645084 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttt55\" (UniqueName: \"kubernetes.io/projected/c2b74701-552f-4983-9361-3889af3c6c25-kube-api-access-ttt55\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:16 crc kubenswrapper[4620]: I0129 15:15:16.645151 4620 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2b74701-552f-4983-9361-3889af3c6c25-util\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:16 crc kubenswrapper[4620]: I0129 15:15:16.645166 4620 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2b74701-552f-4983-9361-3889af3c6c25-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:17 crc kubenswrapper[4620]: I0129 15:15:17.190633 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx" event={"ID":"c2b74701-552f-4983-9361-3889af3c6c25","Type":"ContainerDied","Data":"fdf83df8dd3d3bf388b11bda603b56e76ad37997d3f3f25f86fd52398816bd38"} Jan 29 15:15:17 crc kubenswrapper[4620]: I0129 15:15:17.190677 4620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdf83df8dd3d3bf388b11bda603b56e76ad37997d3f3f25f86fd52398816bd38" Jan 29 15:15:17 crc kubenswrapper[4620]: I0129 15:15:17.190723 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.234899 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-88f9f7c47-mcqwq"] Jan 29 15:15:27 crc kubenswrapper[4620]: E0129 15:15:27.235654 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa662f18-6ab4-43b8-8e65-8de41043b74d" containerName="console" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.235669 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa662f18-6ab4-43b8-8e65-8de41043b74d" containerName="console" Jan 29 15:15:27 crc kubenswrapper[4620]: E0129 15:15:27.235681 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2b74701-552f-4983-9361-3889af3c6c25" containerName="pull" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.235688 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b74701-552f-4983-9361-3889af3c6c25" containerName="pull" Jan 29 15:15:27 crc kubenswrapper[4620]: E0129 15:15:27.235699 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2b74701-552f-4983-9361-3889af3c6c25" containerName="util" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.235705 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b74701-552f-4983-9361-3889af3c6c25" containerName="util" Jan 29 15:15:27 crc kubenswrapper[4620]: E0129 15:15:27.235717 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2b74701-552f-4983-9361-3889af3c6c25" containerName="extract" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.235723 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b74701-552f-4983-9361-3889af3c6c25" containerName="extract" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.235842 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2b74701-552f-4983-9361-3889af3c6c25" containerName="extract" Jan 29 
15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.235855 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa662f18-6ab4-43b8-8e65-8de41043b74d" containerName="console" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.236231 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-88f9f7c47-mcqwq" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.240563 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.240776 4620 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.240905 4620 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-npbhx" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.240796 4620 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.241517 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.257375 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-88f9f7c47-mcqwq"] Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.276179 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ba5af97e-3cd0-4492-8ecf-50a050ab7651-webhook-cert\") pod \"metallb-operator-controller-manager-88f9f7c47-mcqwq\" (UID: \"ba5af97e-3cd0-4492-8ecf-50a050ab7651\") " pod="metallb-system/metallb-operator-controller-manager-88f9f7c47-mcqwq" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.276483 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2krh7\" (UniqueName: \"kubernetes.io/projected/ba5af97e-3cd0-4492-8ecf-50a050ab7651-kube-api-access-2krh7\") pod \"metallb-operator-controller-manager-88f9f7c47-mcqwq\" (UID: \"ba5af97e-3cd0-4492-8ecf-50a050ab7651\") " pod="metallb-system/metallb-operator-controller-manager-88f9f7c47-mcqwq" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.276635 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ba5af97e-3cd0-4492-8ecf-50a050ab7651-apiservice-cert\") pod \"metallb-operator-controller-manager-88f9f7c47-mcqwq\" (UID: \"ba5af97e-3cd0-4492-8ecf-50a050ab7651\") " pod="metallb-system/metallb-operator-controller-manager-88f9f7c47-mcqwq" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.377168 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ba5af97e-3cd0-4492-8ecf-50a050ab7651-apiservice-cert\") pod \"metallb-operator-controller-manager-88f9f7c47-mcqwq\" (UID: \"ba5af97e-3cd0-4492-8ecf-50a050ab7651\") " pod="metallb-system/metallb-operator-controller-manager-88f9f7c47-mcqwq" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.377265 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ba5af97e-3cd0-4492-8ecf-50a050ab7651-webhook-cert\") pod \"metallb-operator-controller-manager-88f9f7c47-mcqwq\" (UID: \"ba5af97e-3cd0-4492-8ecf-50a050ab7651\") " pod="metallb-system/metallb-operator-controller-manager-88f9f7c47-mcqwq" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.377297 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2krh7\" (UniqueName: \"kubernetes.io/projected/ba5af97e-3cd0-4492-8ecf-50a050ab7651-kube-api-access-2krh7\") pod \"metallb-operator-controller-manager-88f9f7c47-mcqwq\" (UID: \"ba5af97e-3cd0-4492-8ecf-50a050ab7651\") " pod="metallb-system/metallb-operator-controller-manager-88f9f7c47-mcqwq" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.382638 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ba5af97e-3cd0-4492-8ecf-50a050ab7651-apiservice-cert\") pod \"metallb-operator-controller-manager-88f9f7c47-mcqwq\" (UID: \"ba5af97e-3cd0-4492-8ecf-50a050ab7651\") " pod="metallb-system/metallb-operator-controller-manager-88f9f7c47-mcqwq" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.382667 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ba5af97e-3cd0-4492-8ecf-50a050ab7651-webhook-cert\") pod \"metallb-operator-controller-manager-88f9f7c47-mcqwq\" (UID: \"ba5af97e-3cd0-4492-8ecf-50a050ab7651\") " pod="metallb-system/metallb-operator-controller-manager-88f9f7c47-mcqwq" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.404432 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2krh7\" (UniqueName: \"kubernetes.io/projected/ba5af97e-3cd0-4492-8ecf-50a050ab7651-kube-api-access-2krh7\") pod \"metallb-operator-controller-manager-88f9f7c47-mcqwq\" (UID: \"ba5af97e-3cd0-4492-8ecf-50a050ab7651\") " pod="metallb-system/metallb-operator-controller-manager-88f9f7c47-mcqwq" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.555688 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-88f9f7c47-mcqwq" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.582124 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-777c554c66-rxv9p"] Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.583625 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-777c554c66-rxv9p" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.585701 4620 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.586286 4620 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-p9m4w" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.586451 4620 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.612020 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-777c554c66-rxv9p"] Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.792363 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h8kq\" (UniqueName: \"kubernetes.io/projected/47675cc2-2da6-48dd-8d00-bf9de5b826d9-kube-api-access-5h8kq\") pod \"metallb-operator-webhook-server-777c554c66-rxv9p\" (UID: \"47675cc2-2da6-48dd-8d00-bf9de5b826d9\") " pod="metallb-system/metallb-operator-webhook-server-777c554c66-rxv9p" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.792696 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47675cc2-2da6-48dd-8d00-bf9de5b826d9-webhook-cert\") pod \"metallb-operator-webhook-server-777c554c66-rxv9p\" (UID: \"47675cc2-2da6-48dd-8d00-bf9de5b826d9\") " pod="metallb-system/metallb-operator-webhook-server-777c554c66-rxv9p" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.792740 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/47675cc2-2da6-48dd-8d00-bf9de5b826d9-apiservice-cert\") pod \"metallb-operator-webhook-server-777c554c66-rxv9p\" (UID: \"47675cc2-2da6-48dd-8d00-bf9de5b826d9\") " pod="metallb-system/metallb-operator-webhook-server-777c554c66-rxv9p" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.891958 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-88f9f7c47-mcqwq"] Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.893562 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47675cc2-2da6-48dd-8d00-bf9de5b826d9-webhook-cert\") pod \"metallb-operator-webhook-server-777c554c66-rxv9p\" (UID: \"47675cc2-2da6-48dd-8d00-bf9de5b826d9\") " pod="metallb-system/metallb-operator-webhook-server-777c554c66-rxv9p" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.893623 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/47675cc2-2da6-48dd-8d00-bf9de5b826d9-apiservice-cert\") pod \"metallb-operator-webhook-server-777c554c66-rxv9p\" (UID: \"47675cc2-2da6-48dd-8d00-bf9de5b826d9\") " pod="metallb-system/metallb-operator-webhook-server-777c554c66-rxv9p" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.893673 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h8kq\" (UniqueName: \"kubernetes.io/projected/47675cc2-2da6-48dd-8d00-bf9de5b826d9-kube-api-access-5h8kq\") pod 
\"metallb-operator-webhook-server-777c554c66-rxv9p\" (UID: \"47675cc2-2da6-48dd-8d00-bf9de5b826d9\") " pod="metallb-system/metallb-operator-webhook-server-777c554c66-rxv9p" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.900417 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/47675cc2-2da6-48dd-8d00-bf9de5b826d9-apiservice-cert\") pod \"metallb-operator-webhook-server-777c554c66-rxv9p\" (UID: \"47675cc2-2da6-48dd-8d00-bf9de5b826d9\") " pod="metallb-system/metallb-operator-webhook-server-777c554c66-rxv9p" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.901403 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47675cc2-2da6-48dd-8d00-bf9de5b826d9-webhook-cert\") pod \"metallb-operator-webhook-server-777c554c66-rxv9p\" (UID: \"47675cc2-2da6-48dd-8d00-bf9de5b826d9\") " pod="metallb-system/metallb-operator-webhook-server-777c554c66-rxv9p" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.921219 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h8kq\" (UniqueName: \"kubernetes.io/projected/47675cc2-2da6-48dd-8d00-bf9de5b826d9-kube-api-access-5h8kq\") pod \"metallb-operator-webhook-server-777c554c66-rxv9p\" (UID: \"47675cc2-2da6-48dd-8d00-bf9de5b826d9\") " pod="metallb-system/metallb-operator-webhook-server-777c554c66-rxv9p" Jan 29 15:15:27 crc kubenswrapper[4620]: I0129 15:15:27.922561 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-777c554c66-rxv9p" Jan 29 15:15:28 crc kubenswrapper[4620]: I0129 15:15:28.149636 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-777c554c66-rxv9p"] Jan 29 15:15:28 crc kubenswrapper[4620]: W0129 15:15:28.156370 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47675cc2_2da6_48dd_8d00_bf9de5b826d9.slice/crio-c5db1121c8b749eaf5921a441d66a49008802a8f89ace76fe83ba3178289a4fa WatchSource:0}: Error finding container c5db1121c8b749eaf5921a441d66a49008802a8f89ace76fe83ba3178289a4fa: Status 404 returned error can't find the container with id c5db1121c8b749eaf5921a441d66a49008802a8f89ace76fe83ba3178289a4fa Jan 29 15:15:28 crc kubenswrapper[4620]: I0129 15:15:28.245706 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-777c554c66-rxv9p" event={"ID":"47675cc2-2da6-48dd-8d00-bf9de5b826d9","Type":"ContainerStarted","Data":"c5db1121c8b749eaf5921a441d66a49008802a8f89ace76fe83ba3178289a4fa"} Jan 29 15:15:28 crc kubenswrapper[4620]: I0129 15:15:28.246731 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-88f9f7c47-mcqwq" event={"ID":"ba5af97e-3cd0-4492-8ecf-50a050ab7651","Type":"ContainerStarted","Data":"248d110b0df2d750f4e680aa34fe41c061452255b8e0624a5fdda6d60d15954f"} Jan 29 15:15:35 crc kubenswrapper[4620]: I0129 15:15:35.323315 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-88f9f7c47-mcqwq" event={"ID":"ba5af97e-3cd0-4492-8ecf-50a050ab7651","Type":"ContainerStarted","Data":"886f11e616e278013c5c553a0799b6d4033cc6d38be28de936fb47397d5caea3"} Jan 29 15:15:35 crc kubenswrapper[4620]: I0129 15:15:35.324871 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-88f9f7c47-mcqwq" Jan 29 15:15:35 crc kubenswrapper[4620]: I0129 15:15:35.326258 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-777c554c66-rxv9p" event={"ID":"47675cc2-2da6-48dd-8d00-bf9de5b826d9","Type":"ContainerStarted","Data":"14e91b5a3943669ebf8956e282ed2b2b5e572a0ae535a5f89a2c02779406601f"} Jan 29 15:15:35 crc kubenswrapper[4620]: I0129 15:15:35.326674 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-777c554c66-rxv9p" Jan 29 15:15:35 crc kubenswrapper[4620]: I0129 15:15:35.346034 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-88f9f7c47-mcqwq" podStartSLOduration=1.310489929 podStartE2EDuration="8.346014683s" podCreationTimestamp="2026-01-29 15:15:27 +0000 UTC" firstStartedPulling="2026-01-29 15:15:27.905848813 +0000 UTC m=+868.518676458" lastFinishedPulling="2026-01-29 15:15:34.941373567 +0000 UTC m=+875.554201212" observedRunningTime="2026-01-29 15:15:35.343183889 +0000 UTC m=+875.956011544" watchObservedRunningTime="2026-01-29 15:15:35.346014683 +0000 UTC m=+875.958842328" Jan 29 15:15:35 crc kubenswrapper[4620]: I0129 15:15:35.377381 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-777c554c66-rxv9p" podStartSLOduration=1.571217407 podStartE2EDuration="8.377366032s" podCreationTimestamp="2026-01-29 15:15:27 +0000 UTC" firstStartedPulling="2026-01-29 15:15:28.159884381 +0000 UTC m=+868.772712026" lastFinishedPulling="2026-01-29 15:15:34.966033006 +0000 UTC m=+875.578860651" observedRunningTime="2026-01-29 15:15:35.374764344 +0000 UTC m=+875.987591989" watchObservedRunningTime="2026-01-29 15:15:35.377366032 +0000 UTC m=+875.990193677" Jan 29 15:15:47 crc kubenswrapper[4620]: I0129 15:15:47.926896 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-777c554c66-rxv9p" Jan 29 15:16:07 crc kubenswrapper[4620]: I0129 15:16:07.558005 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-88f9f7c47-mcqwq" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.287156 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-h6cmz"] Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.289246 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.303205 4620 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.303589 4620 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-gcmxt" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.309422 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.327828 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e6981bff-0689-47b1-96f6-f265340449f9-frr-conf\") pod \"frr-k8s-h6cmz\" (UID: \"e6981bff-0689-47b1-96f6-f265340449f9\") " pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.327911 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7md7d\" (UniqueName: \"kubernetes.io/projected/e6981bff-0689-47b1-96f6-f265340449f9-kube-api-access-7md7d\") pod \"frr-k8s-h6cmz\" (UID: \"e6981bff-0689-47b1-96f6-f265340449f9\") " pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.327975 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e6981bff-0689-47b1-96f6-f265340449f9-frr-startup\") pod \"frr-k8s-h6cmz\" (UID: \"e6981bff-0689-47b1-96f6-f265340449f9\") " pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.328010 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e6981bff-0689-47b1-96f6-f265340449f9-frr-sockets\") pod \"frr-k8s-h6cmz\" (UID: \"e6981bff-0689-47b1-96f6-f265340449f9\") " pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.328046 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e6981bff-0689-47b1-96f6-f265340449f9-metrics\") pod \"frr-k8s-h6cmz\" (UID: \"e6981bff-0689-47b1-96f6-f265340449f9\") " pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.328081 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e6981bff-0689-47b1-96f6-f265340449f9-reloader\") pod \"frr-k8s-h6cmz\" (UID: \"e6981bff-0689-47b1-96f6-f265340449f9\") " pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.328121 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6981bff-0689-47b1-96f6-f265340449f9-metrics-certs\") pod \"frr-k8s-h6cmz\" (UID: \"e6981bff-0689-47b1-96f6-f265340449f9\") " pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.362610 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt"] Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.363865 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.367541 4620 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.413490 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt"] Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.431638 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e6981bff-0689-47b1-96f6-f265340449f9-frr-sockets\") pod \"frr-k8s-h6cmz\" (UID: \"e6981bff-0689-47b1-96f6-f265340449f9\") " pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.431717 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e6981bff-0689-47b1-96f6-f265340449f9-metrics\") pod \"frr-k8s-h6cmz\" (UID: \"e6981bff-0689-47b1-96f6-f265340449f9\") " pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.431750 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e6981bff-0689-47b1-96f6-f265340449f9-reloader\") pod \"frr-k8s-h6cmz\" (UID: \"e6981bff-0689-47b1-96f6-f265340449f9\") " pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.431804 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6981bff-0689-47b1-96f6-f265340449f9-metrics-certs\") pod \"frr-k8s-h6cmz\" (UID: \"e6981bff-0689-47b1-96f6-f265340449f9\") " pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.431850 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e6981bff-0689-47b1-96f6-f265340449f9-frr-conf\") pod \"frr-k8s-h6cmz\" (UID: \"e6981bff-0689-47b1-96f6-f265340449f9\") " pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.431887 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7md7d\" (UniqueName: \"kubernetes.io/projected/e6981bff-0689-47b1-96f6-f265340449f9-kube-api-access-7md7d\") pod \"frr-k8s-h6cmz\" (UID: \"e6981bff-0689-47b1-96f6-f265340449f9\") " pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.431934 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e6981bff-0689-47b1-96f6-f265340449f9-frr-startup\") pod \"frr-k8s-h6cmz\" (UID: \"e6981bff-0689-47b1-96f6-f265340449f9\") " pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.433505 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e6981bff-0689-47b1-96f6-f265340449f9-frr-startup\") pod \"frr-k8s-h6cmz\" (UID: \"e6981bff-0689-47b1-96f6-f265340449f9\") " pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.434387 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e6981bff-0689-47b1-96f6-f265340449f9-frr-sockets\") pod 
\"frr-k8s-h6cmz\" (UID: \"e6981bff-0689-47b1-96f6-f265340449f9\") " pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.434628 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e6981bff-0689-47b1-96f6-f265340449f9-metrics\") pod \"frr-k8s-h6cmz\" (UID: \"e6981bff-0689-47b1-96f6-f265340449f9\") " pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:08 crc kubenswrapper[4620]: E0129 15:16:08.434979 4620 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 29 15:16:08 crc kubenswrapper[4620]: E0129 15:16:08.435080 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6981bff-0689-47b1-96f6-f265340449f9-metrics-certs podName:e6981bff-0689-47b1-96f6-f265340449f9 nodeName:}" failed. No retries permitted until 2026-01-29 15:16:08.935057607 +0000 UTC m=+909.547885302 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e6981bff-0689-47b1-96f6-f265340449f9-metrics-certs") pod "frr-k8s-h6cmz" (UID: "e6981bff-0689-47b1-96f6-f265340449f9") : secret "frr-k8s-certs-secret" not found Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.435011 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e6981bff-0689-47b1-96f6-f265340449f9-frr-conf\") pod \"frr-k8s-h6cmz\" (UID: \"e6981bff-0689-47b1-96f6-f265340449f9\") " pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.435536 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e6981bff-0689-47b1-96f6-f265340449f9-reloader\") pod \"frr-k8s-h6cmz\" (UID: \"e6981bff-0689-47b1-96f6-f265340449f9\") " pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.460255 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-8n22g"] Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.461105 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-8n22g" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.468194 4620 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.468430 4620 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-tndnn" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.469188 4620 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.472553 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-jvtb9"] Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.473031 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.473920 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-jvtb9" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.476674 4620 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.499320 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7md7d\" (UniqueName: \"kubernetes.io/projected/e6981bff-0689-47b1-96f6-f265340449f9-kube-api-access-7md7d\") pod \"frr-k8s-h6cmz\" (UID: \"e6981bff-0689-47b1-96f6-f265340449f9\") " pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.525825 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-jvtb9"] Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.533019 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd14327c-f27e-4e0b-9955-a9cc76e1c253-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-klnxt\" (UID: \"fd14327c-f27e-4e0b-9955-a9cc76e1c253\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.533089 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pjjz\" (UniqueName: \"kubernetes.io/projected/fd14327c-f27e-4e0b-9955-a9cc76e1c253-kube-api-access-8pjjz\") pod \"frr-k8s-webhook-server-7df86c4f6c-klnxt\" (UID: \"fd14327c-f27e-4e0b-9955-a9cc76e1c253\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.634515 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b3b9527-5117-4a99-9859-dc287cefd3bf-cert\") pod \"controller-6968d8fdc4-jvtb9\" (UID: \"3b3b9527-5117-4a99-9859-dc287cefd3bf\") " pod="metallb-system/controller-6968d8fdc4-jvtb9" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.634586 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed16c687-3d63-4256-8715-b5654d4760c9-metrics-certs\") pod \"speaker-8n22g\" (UID: \"ed16c687-3d63-4256-8715-b5654d4760c9\") " pod="metallb-system/speaker-8n22g" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.634607 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ed16c687-3d63-4256-8715-b5654d4760c9-metallb-excludel2\") pod \"speaker-8n22g\" (UID: \"ed16c687-3d63-4256-8715-b5654d4760c9\") " pod="metallb-system/speaker-8n22g" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.634749 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4kbv\" (UniqueName: \"kubernetes.io/projected/3b3b9527-5117-4a99-9859-dc287cefd3bf-kube-api-access-k4kbv\") pod \"controller-6968d8fdc4-jvtb9\" (UID: \"3b3b9527-5117-4a99-9859-dc287cefd3bf\") " pod="metallb-system/controller-6968d8fdc4-jvtb9" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.634828 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q25th\" (UniqueName: \"kubernetes.io/projected/ed16c687-3d63-4256-8715-b5654d4760c9-kube-api-access-q25th\") pod \"speaker-8n22g\" (UID: 
\"ed16c687-3d63-4256-8715-b5654d4760c9\") " pod="metallb-system/speaker-8n22g" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.634923 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd14327c-f27e-4e0b-9955-a9cc76e1c253-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-klnxt\" (UID: \"fd14327c-f27e-4e0b-9955-a9cc76e1c253\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.634976 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ed16c687-3d63-4256-8715-b5654d4760c9-memberlist\") pod \"speaker-8n22g\" (UID: \"ed16c687-3d63-4256-8715-b5654d4760c9\") " pod="metallb-system/speaker-8n22g" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.635027 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pjjz\" (UniqueName: \"kubernetes.io/projected/fd14327c-f27e-4e0b-9955-a9cc76e1c253-kube-api-access-8pjjz\") pod \"frr-k8s-webhook-server-7df86c4f6c-klnxt\" (UID: \"fd14327c-f27e-4e0b-9955-a9cc76e1c253\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.635089 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b3b9527-5117-4a99-9859-dc287cefd3bf-metrics-certs\") pod \"controller-6968d8fdc4-jvtb9\" (UID: \"3b3b9527-5117-4a99-9859-dc287cefd3bf\") " pod="metallb-system/controller-6968d8fdc4-jvtb9" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.648153 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd14327c-f27e-4e0b-9955-a9cc76e1c253-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-klnxt\" (UID: \"fd14327c-f27e-4e0b-9955-a9cc76e1c253\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.666409 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pjjz\" (UniqueName: \"kubernetes.io/projected/fd14327c-f27e-4e0b-9955-a9cc76e1c253-kube-api-access-8pjjz\") pod \"frr-k8s-webhook-server-7df86c4f6c-klnxt\" (UID: \"fd14327c-f27e-4e0b-9955-a9cc76e1c253\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.719441 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.737195 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed16c687-3d63-4256-8715-b5654d4760c9-metrics-certs\") pod \"speaker-8n22g\" (UID: \"ed16c687-3d63-4256-8715-b5654d4760c9\") " pod="metallb-system/speaker-8n22g" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.737248 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ed16c687-3d63-4256-8715-b5654d4760c9-metallb-excludel2\") pod \"speaker-8n22g\" (UID: \"ed16c687-3d63-4256-8715-b5654d4760c9\") " pod="metallb-system/speaker-8n22g" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.737296 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4kbv\" (UniqueName: \"kubernetes.io/projected/3b3b9527-5117-4a99-9859-dc287cefd3bf-kube-api-access-k4kbv\") pod \"controller-6968d8fdc4-jvtb9\" (UID: \"3b3b9527-5117-4a99-9859-dc287cefd3bf\") " pod="metallb-system/controller-6968d8fdc4-jvtb9" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.737317 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q25th\" (UniqueName: \"kubernetes.io/projected/ed16c687-3d63-4256-8715-b5654d4760c9-kube-api-access-q25th\") pod \"speaker-8n22g\" (UID: \"ed16c687-3d63-4256-8715-b5654d4760c9\") " pod="metallb-system/speaker-8n22g" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.737372 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ed16c687-3d63-4256-8715-b5654d4760c9-memberlist\") pod \"speaker-8n22g\" (UID: \"ed16c687-3d63-4256-8715-b5654d4760c9\") " pod="metallb-system/speaker-8n22g" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.737408 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b3b9527-5117-4a99-9859-dc287cefd3bf-metrics-certs\") pod \"controller-6968d8fdc4-jvtb9\" (UID: \"3b3b9527-5117-4a99-9859-dc287cefd3bf\") " pod="metallb-system/controller-6968d8fdc4-jvtb9" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.737425 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b3b9527-5117-4a99-9859-dc287cefd3bf-cert\") pod \"controller-6968d8fdc4-jvtb9\" (UID: \"3b3b9527-5117-4a99-9859-dc287cefd3bf\") " pod="metallb-system/controller-6968d8fdc4-jvtb9" Jan 29 15:16:08 crc kubenswrapper[4620]: E0129 15:16:08.737633 4620 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 29 15:16:08 crc kubenswrapper[4620]: E0129 15:16:08.737718 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed16c687-3d63-4256-8715-b5654d4760c9-metrics-certs podName:ed16c687-3d63-4256-8715-b5654d4760c9 nodeName:}" failed. No retries permitted until 2026-01-29 15:16:09.237681517 +0000 UTC m=+909.850509162 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed16c687-3d63-4256-8715-b5654d4760c9-metrics-certs") pod "speaker-8n22g" (UID: "ed16c687-3d63-4256-8715-b5654d4760c9") : secret "speaker-certs-secret" not found Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.738705 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ed16c687-3d63-4256-8715-b5654d4760c9-metallb-excludel2\") pod \"speaker-8n22g\" (UID: \"ed16c687-3d63-4256-8715-b5654d4760c9\") " pod="metallb-system/speaker-8n22g" Jan 29 15:16:08 crc kubenswrapper[4620]: E0129 15:16:08.739151 4620 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 15:16:08 crc kubenswrapper[4620]: E0129 15:16:08.739196 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed16c687-3d63-4256-8715-b5654d4760c9-memberlist podName:ed16c687-3d63-4256-8715-b5654d4760c9 nodeName:}" failed. No retries permitted until 2026-01-29 15:16:09.239183814 +0000 UTC m=+909.852011449 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ed16c687-3d63-4256-8715-b5654d4760c9-memberlist") pod "speaker-8n22g" (UID: "ed16c687-3d63-4256-8715-b5654d4760c9") : secret "metallb-memberlist" not found Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.744720 4620 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.745857 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3b3b9527-5117-4a99-9859-dc287cefd3bf-metrics-certs\") pod \"controller-6968d8fdc4-jvtb9\" (UID: \"3b3b9527-5117-4a99-9859-dc287cefd3bf\") " pod="metallb-system/controller-6968d8fdc4-jvtb9" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.752223 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b3b9527-5117-4a99-9859-dc287cefd3bf-cert\") pod \"controller-6968d8fdc4-jvtb9\" (UID: \"3b3b9527-5117-4a99-9859-dc287cefd3bf\") " pod="metallb-system/controller-6968d8fdc4-jvtb9" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.762621 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4kbv\" (UniqueName: \"kubernetes.io/projected/3b3b9527-5117-4a99-9859-dc287cefd3bf-kube-api-access-k4kbv\") pod \"controller-6968d8fdc4-jvtb9\" (UID: \"3b3b9527-5117-4a99-9859-dc287cefd3bf\") " pod="metallb-system/controller-6968d8fdc4-jvtb9" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.800644 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q25th\" (UniqueName: \"kubernetes.io/projected/ed16c687-3d63-4256-8715-b5654d4760c9-kube-api-access-q25th\") pod \"speaker-8n22g\" (UID: \"ed16c687-3d63-4256-8715-b5654d4760c9\") " pod="metallb-system/speaker-8n22g" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.876728 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-jvtb9" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.977379 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6981bff-0689-47b1-96f6-f265340449f9-metrics-certs\") pod \"frr-k8s-h6cmz\" (UID: \"e6981bff-0689-47b1-96f6-f265340449f9\") " pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:08 crc kubenswrapper[4620]: I0129 15:16:08.985634 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6981bff-0689-47b1-96f6-f265340449f9-metrics-certs\") pod \"frr-k8s-h6cmz\" (UID: \"e6981bff-0689-47b1-96f6-f265340449f9\") " pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.047023 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt"] Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.125641 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-jvtb9"] Jan 29 15:16:09 crc kubenswrapper[4620]: W0129 15:16:09.134788 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b3b9527_5117_4a99_9859_dc287cefd3bf.slice/crio-fd1e19f9cef3a580d447ff6e99cff76f2b759b912a3c2e06e77eb64c2ae2dcd7 WatchSource:0}: Error finding container fd1e19f9cef3a580d447ff6e99cff76f2b759b912a3c2e06e77eb64c2ae2dcd7: Status 404 returned error can't find the container with id fd1e19f9cef3a580d447ff6e99cff76f2b759b912a3c2e06e77eb64c2ae2dcd7 Jan 29 15:16:09 crc kubenswrapper[4620]: E0129 15:16:09.196474 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862" Jan 29 15:16:09 crc kubenswrapper[4620]: E0129 15:16:09.196708 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:frr-k8s-webhook-server,Image:registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862,Command:[/frr-k8s],Args:[--log-level=debug --webhook-mode=onlywebhook --disable-cert-rotation=true --namespace=$(NAMESPACE) --metrics-bind-address=:7572],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:monitoring,HostPort:0,ContainerPort:7572,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pjjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:{1 0 
monitoring},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:{1 0 monitoring},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000700000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod frr-k8s-webhook-server-7df86c4f6c-klnxt_metallb-system(fd14327c-f27e-4e0b-9955-a9cc76e1c253): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:16:09 crc kubenswrapper[4620]: E0129 15:16:09.199195 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.213376 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.288411 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed16c687-3d63-4256-8715-b5654d4760c9-metrics-certs\") pod \"speaker-8n22g\" (UID: \"ed16c687-3d63-4256-8715-b5654d4760c9\") " pod="metallb-system/speaker-8n22g" Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.288752 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ed16c687-3d63-4256-8715-b5654d4760c9-memberlist\") pod \"speaker-8n22g\" (UID: \"ed16c687-3d63-4256-8715-b5654d4760c9\") " pod="metallb-system/speaker-8n22g" Jan 29 15:16:09 crc kubenswrapper[4620]: E0129 15:16:09.288919 4620 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 15:16:09 crc kubenswrapper[4620]: E0129 15:16:09.288968 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed16c687-3d63-4256-8715-b5654d4760c9-memberlist podName:ed16c687-3d63-4256-8715-b5654d4760c9 nodeName:}" failed. No retries permitted until 2026-01-29 15:16:10.288954003 +0000 UTC m=+910.901781648 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ed16c687-3d63-4256-8715-b5654d4760c9-memberlist") pod "speaker-8n22g" (UID: "ed16c687-3d63-4256-8715-b5654d4760c9") : secret "metallb-memberlist" not found Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.292295 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed16c687-3d63-4256-8715-b5654d4760c9-metrics-certs\") pod \"speaker-8n22g\" (UID: \"ed16c687-3d63-4256-8715-b5654d4760c9\") " pod="metallb-system/speaker-8n22g" Jan 29 15:16:09 crc kubenswrapper[4620]: E0129 15:16:09.483614 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862" Jan 29 15:16:09 crc kubenswrapper[4620]: E0129 15:16:09.483822 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:cp-frr-files,Image:registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862,Command:[/bin/sh -c cp -rLf /tmp/frr/* /etc/frr/],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:frr-startup,ReadOnly:false,MountPath:/tmp/frr,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:frr-conf,ReadOnly:false,MountPath:/etc/frr,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7md7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*100,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*101,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod frr-k8s-h6cmz_metallb-system(e6981bff-0689-47b1-96f6-f265340449f9): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:16:09 crc kubenswrapper[4620]: E0129 15:16:09.485259 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.521873 4620 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h6cmz" event={"ID":"e6981bff-0689-47b1-96f6-f265340449f9","Type":"ContainerStarted","Data":"e81903b3b0426973e8d390bb5c673a05334a871a42bbb7a7b20f8a73979e961e"} Jan 29 15:16:09 crc kubenswrapper[4620]: E0129 15:16:09.523401 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.525891 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-jvtb9" event={"ID":"3b3b9527-5117-4a99-9859-dc287cefd3bf","Type":"ContainerStarted","Data":"3cab4974e2961ac73abe90fa44e0971f5605e4fccd91e50415b294114a307c9a"} Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.525940 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-jvtb9" event={"ID":"3b3b9527-5117-4a99-9859-dc287cefd3bf","Type":"ContainerStarted","Data":"08dc251df6d2ed5b3f0e41164d6e9e54f606dc0a9959ab21be17cab621dc06b6"} Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.525975 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-jvtb9" event={"ID":"3b3b9527-5117-4a99-9859-dc287cefd3bf","Type":"ContainerStarted","Data":"fd1e19f9cef3a580d447ff6e99cff76f2b759b912a3c2e06e77eb64c2ae2dcd7"} Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.526089 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-jvtb9" Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.527724 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" event={"ID":"fd14327c-f27e-4e0b-9955-a9cc76e1c253","Type":"ContainerStarted","Data":"fe67dc94488b0342a82598b59c095b1211f30aa31fda1bc23e6dbba594a2e640"} Jan 29 15:16:09 crc kubenswrapper[4620]: E0129 15:16:09.529644 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.609871 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ql7jk"] Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.611980 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ql7jk" Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.618192 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ql7jk"] Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.715885 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-jvtb9" podStartSLOduration=1.7158683350000001 podStartE2EDuration="1.715868335s" podCreationTimestamp="2026-01-29 15:16:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:16:09.71568761 +0000 UTC m=+910.328515255" watchObservedRunningTime="2026-01-29 15:16:09.715868335 +0000 UTC m=+910.328695980" Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.795629 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrq6w\" (UniqueName: \"kubernetes.io/projected/4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50-kube-api-access-mrq6w\") pod \"community-operators-ql7jk\" (UID: \"4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50\") " pod="openshift-marketplace/community-operators-ql7jk" Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.795912 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50-utilities\") pod \"community-operators-ql7jk\" (UID: \"4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50\") " pod="openshift-marketplace/community-operators-ql7jk" Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.795997 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50-catalog-content\") pod \"community-operators-ql7jk\" (UID: \"4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50\") " pod="openshift-marketplace/community-operators-ql7jk" Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.897582 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrq6w\" (UniqueName: \"kubernetes.io/projected/4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50-kube-api-access-mrq6w\") pod \"community-operators-ql7jk\" (UID: \"4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50\") " pod="openshift-marketplace/community-operators-ql7jk" Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.897684 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50-utilities\") pod \"community-operators-ql7jk\" (UID: \"4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50\") " pod="openshift-marketplace/community-operators-ql7jk" Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.897709 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50-catalog-content\") pod \"community-operators-ql7jk\" (UID: \"4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50\") " pod="openshift-marketplace/community-operators-ql7jk" Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.898196 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50-catalog-content\") pod \"community-operators-ql7jk\" 
(UID: \"4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50\") " pod="openshift-marketplace/community-operators-ql7jk" Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.898774 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50-utilities\") pod \"community-operators-ql7jk\" (UID: \"4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50\") " pod="openshift-marketplace/community-operators-ql7jk" Jan 29 15:16:09 crc kubenswrapper[4620]: I0129 15:16:09.927984 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrq6w\" (UniqueName: \"kubernetes.io/projected/4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50-kube-api-access-mrq6w\") pod \"community-operators-ql7jk\" (UID: \"4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50\") " pod="openshift-marketplace/community-operators-ql7jk" Jan 29 15:16:10 crc kubenswrapper[4620]: I0129 15:16:10.226272 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ql7jk" Jan 29 15:16:10 crc kubenswrapper[4620]: I0129 15:16:10.303520 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ed16c687-3d63-4256-8715-b5654d4760c9-memberlist\") pod \"speaker-8n22g\" (UID: \"ed16c687-3d63-4256-8715-b5654d4760c9\") " pod="metallb-system/speaker-8n22g" Jan 29 15:16:10 crc kubenswrapper[4620]: I0129 15:16:10.323836 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ed16c687-3d63-4256-8715-b5654d4760c9-memberlist\") pod \"speaker-8n22g\" (UID: \"ed16c687-3d63-4256-8715-b5654d4760c9\") " pod="metallb-system/speaker-8n22g" Jan 29 15:16:10 crc kubenswrapper[4620]: E0129 15:16:10.553895 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:16:10 crc kubenswrapper[4620]: I0129 15:16:10.604008 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-8n22g"
Jan 29 15:16:10 crc kubenswrapper[4620]: I0129 15:16:10.726439 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ql7jk"]
Jan 29 15:16:11 crc kubenswrapper[4620]: I0129 15:16:11.557112 4620 generic.go:334] "Generic (PLEG): container finished" podID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" containerID="7559af10a8a7f88ac80b6f008328ef85d6db65edffd67f084083584b7543af20" exitCode=0
Jan 29 15:16:11 crc kubenswrapper[4620]: I0129 15:16:11.557306 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ql7jk" event={"ID":"4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50","Type":"ContainerDied","Data":"7559af10a8a7f88ac80b6f008328ef85d6db65edffd67f084083584b7543af20"}
Jan 29 15:16:11 crc kubenswrapper[4620]: I0129 15:16:11.557436 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ql7jk" event={"ID":"4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50","Type":"ContainerStarted","Data":"a358f901415effc99790ea758b3319d125d3aafb853df83716d135fc628c37f7"}
Jan 29 15:16:11 crc kubenswrapper[4620]: I0129 15:16:11.559836 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8n22g" event={"ID":"ed16c687-3d63-4256-8715-b5654d4760c9","Type":"ContainerStarted","Data":"3658c1095daa0d51be650487016a201e17eff4a19a3f9c93ac74666822c740a3"}
Jan 29 15:16:11 crc kubenswrapper[4620]: I0129 15:16:11.559863 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8n22g" event={"ID":"ed16c687-3d63-4256-8715-b5654d4760c9","Type":"ContainerStarted","Data":"b77ff0972a1d07ad508f6209156b370efb38efc92d9f36819db50b2cfebb6a94"}
Jan 29 15:16:11 crc kubenswrapper[4620]: I0129 15:16:11.559877 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8n22g" event={"ID":"ed16c687-3d63-4256-8715-b5654d4760c9","Type":"ContainerStarted","Data":"0c395ec201c1337303a346c621acce76dcc800a65b3438c6554be4bf8718c459"}
Jan 29 15:16:11 crc kubenswrapper[4620]: I0129 15:16:11.560045 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-8n22g"
Jan 29 15:16:11 crc kubenswrapper[4620]: E0129 15:16:11.676513 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18"
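
[Editor's note] Every pull failure in this log shares the same root cause: the bearer-token request to registry.redhat.io is rejected with 403 (Forbidden), which typically points at missing or expired registry credentials rather than at the images themselves. A minimal client-go sketch for surfacing such failures from pod status, assuming a standard kubeconfig location and reusing the namespace/pod name from the entries above:

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig location; adjust as needed.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace and pod name taken from the log entries above.
	pod, err := cs.CoreV1().Pods("openshift-marketplace").Get(
		context.TODO(), "community-operators-ql7jk", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Pull failures on init containers surface as a Waiting state with
	// reason ErrImagePull or ImagePullBackOff.
	for _, st := range pod.Status.InitContainerStatuses {
		if w := st.State.Waiting; w != nil {
			fmt.Printf("%s: %s: %s\n", st.Name, w.Reason, w.Message)
		}
	}
}
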
Jan 29 15:16:11 crc kubenswrapper[4620]: E0129 15:16:11.676663 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrq6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ql7jk_openshift-marketplace(4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 15:16:11 crc kubenswrapper[4620]: E0129 15:16:11.677980 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50"
Jan 29 15:16:12 crc kubenswrapper[4620]: E0129 15:16:12.568996 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50"
Jan 29 15:16:12 crc kubenswrapper[4620]: I0129 15:16:12.591657 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-8n22g" podStartSLOduration=4.591637316 podStartE2EDuration="4.591637316s" podCreationTimestamp="2026-01-29 15:16:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:16:11.623577211 +0000 UTC m=+912.236404856" watchObservedRunningTime="2026-01-29 15:16:12.591637316 +0000 UTC m=+913.204464961"
Jan 29 15:16:18 crc kubenswrapper[4620]: I0129 15:16:18.281522 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sg6jb"]
Jan 29 15:16:18 crc kubenswrapper[4620]: I0129 15:16:18.283082 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sg6jb"
Jan 29 15:16:18 crc kubenswrapper[4620]: I0129 15:16:18.342618 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg6jb"]
Jan 29 15:16:18 crc kubenswrapper[4620]: I0129 15:16:18.351398 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83c6763d-3a90-4ded-b50c-57eb36ad1c0d-utilities\") pod \"redhat-marketplace-sg6jb\" (UID: \"83c6763d-3a90-4ded-b50c-57eb36ad1c0d\") " pod="openshift-marketplace/redhat-marketplace-sg6jb"
Jan 29 15:16:18 crc kubenswrapper[4620]: I0129 15:16:18.351461 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83c6763d-3a90-4ded-b50c-57eb36ad1c0d-catalog-content\") pod \"redhat-marketplace-sg6jb\" (UID: \"83c6763d-3a90-4ded-b50c-57eb36ad1c0d\") " pod="openshift-marketplace/redhat-marketplace-sg6jb"
Jan 29 15:16:18 crc kubenswrapper[4620]: I0129 15:16:18.351496 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvbcx\" (UniqueName: \"kubernetes.io/projected/83c6763d-3a90-4ded-b50c-57eb36ad1c0d-kube-api-access-fvbcx\") pod \"redhat-marketplace-sg6jb\" (UID: \"83c6763d-3a90-4ded-b50c-57eb36ad1c0d\") " pod="openshift-marketplace/redhat-marketplace-sg6jb"
Jan 29 15:16:18 crc kubenswrapper[4620]: I0129 15:16:18.452940 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83c6763d-3a90-4ded-b50c-57eb36ad1c0d-utilities\") pod \"redhat-marketplace-sg6jb\" (UID: \"83c6763d-3a90-4ded-b50c-57eb36ad1c0d\") " pod="openshift-marketplace/redhat-marketplace-sg6jb"
Jan 29 15:16:18 crc kubenswrapper[4620]: I0129 15:16:18.453015 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83c6763d-3a90-4ded-b50c-57eb36ad1c0d-catalog-content\") pod \"redhat-marketplace-sg6jb\" (UID: \"83c6763d-3a90-4ded-b50c-57eb36ad1c0d\") " pod="openshift-marketplace/redhat-marketplace-sg6jb"
Jan 29 15:16:18 crc kubenswrapper[4620]: I0129 15:16:18.453051 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvbcx\" (UniqueName: \"kubernetes.io/projected/83c6763d-3a90-4ded-b50c-57eb36ad1c0d-kube-api-access-fvbcx\") pod \"redhat-marketplace-sg6jb\" (UID: \"83c6763d-3a90-4ded-b50c-57eb36ad1c0d\") " pod="openshift-marketplace/redhat-marketplace-sg6jb"
Jan 29 15:16:18 crc kubenswrapper[4620]: I0129 15:16:18.453814 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83c6763d-3a90-4ded-b50c-57eb36ad1c0d-utilities\") pod \"redhat-marketplace-sg6jb\" (UID: \"83c6763d-3a90-4ded-b50c-57eb36ad1c0d\") " pod="openshift-marketplace/redhat-marketplace-sg6jb"
Jan 29 15:16:18 crc kubenswrapper[4620]: I0129 15:16:18.453830 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83c6763d-3a90-4ded-b50c-57eb36ad1c0d-catalog-content\") pod \"redhat-marketplace-sg6jb\" (UID: \"83c6763d-3a90-4ded-b50c-57eb36ad1c0d\") " pod="openshift-marketplace/redhat-marketplace-sg6jb"
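
[Editor's note] Each volume above marches through the same three log lines: VerifyControllerAttachedVolume, MountVolume started, and MountVolume.SetUp succeeded, with each phase logged for all volumes before the next phase begins. A toy Go sketch of that phase-major progression, illustrating the pattern visible in the log rather than kubelet's actual reconciler:

package main

import "fmt"

// Phases as they appear in the log for each volume.
var phases = []string{
	"operationExecutor.VerifyControllerAttachedVolume started",
	"operationExecutor.MountVolume started",
	"MountVolume.SetUp succeeded",
}

func main() {
	// The three volumes of redhat-marketplace-sg6jb from the entries above.
	volumes := []string{"utilities", "catalog-content", "kube-api-access-fvbcx"}
	// Phase-major order: attach verification for every volume is logged
	// before any mount starts, matching the timestamps in the log.
	for _, phase := range phases {
		for _, v := range volumes {
			fmt.Printf("%s for volume %q\n", phase, v)
		}
	}
}
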
Jan 29 15:16:18 crc kubenswrapper[4620]: I0129 15:16:18.477114 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvbcx\" (UniqueName: \"kubernetes.io/projected/83c6763d-3a90-4ded-b50c-57eb36ad1c0d-kube-api-access-fvbcx\") pod \"redhat-marketplace-sg6jb\" (UID: \"83c6763d-3a90-4ded-b50c-57eb36ad1c0d\") " pod="openshift-marketplace/redhat-marketplace-sg6jb"
Jan 29 15:16:18 crc kubenswrapper[4620]: I0129 15:16:18.607825 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sg6jb"
Jan 29 15:16:18 crc kubenswrapper[4620]: I0129 15:16:18.827901 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg6jb"]
Jan 29 15:16:19 crc kubenswrapper[4620]: I0129 15:16:19.611883 4620 generic.go:334] "Generic (PLEG): container finished" podID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" containerID="d6245e39e18ff5886de81f46e5371900d12e527343ee1736559406533f7f8f32" exitCode=0
Jan 29 15:16:19 crc kubenswrapper[4620]: I0129 15:16:19.611939 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg6jb" event={"ID":"83c6763d-3a90-4ded-b50c-57eb36ad1c0d","Type":"ContainerDied","Data":"d6245e39e18ff5886de81f46e5371900d12e527343ee1736559406533f7f8f32"}
Jan 29 15:16:19 crc kubenswrapper[4620]: I0129 15:16:19.613650 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg6jb" event={"ID":"83c6763d-3a90-4ded-b50c-57eb36ad1c0d","Type":"ContainerStarted","Data":"d088e44d55acb68c349805eda31938c8e3cfbb23e60ab3262ffd6d6e5b7535e7"}
Jan 29 15:16:19 crc kubenswrapper[4620]: E0129 15:16:19.734149 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 29 15:16:19 crc kubenswrapper[4620]: E0129 15:16:19.734313 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvbcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-sg6jb_openshift-marketplace(83c6763d-3a90-4ded-b50c-57eb36ad1c0d): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:16:19 crc kubenswrapper[4620]: E0129 15:16:19.735498 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:16:20 crc kubenswrapper[4620]: E0129 15:16:20.622050 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:16:21 crc kubenswrapper[4620]: E0129 15:16:21.001121 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862" Jan 29 15:16:21 crc kubenswrapper[4620]: E0129 15:16:21.001267 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:cp-frr-files,Image:registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862,Command:[/bin/sh -c cp -rLf /tmp/frr/* 
/etc/frr/],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:frr-startup,ReadOnly:false,MountPath:/tmp/frr,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:frr-conf,ReadOnly:false,MountPath:/etc/frr,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7md7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*100,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*101,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod frr-k8s-h6cmz_metallb-system(e6981bff-0689-47b1-96f6-f265340449f9): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:16:21 crc kubenswrapper[4620]: E0129 15:16:21.002479 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:16:21 crc kubenswrapper[4620]: E0129 15:16:21.994790 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862" Jan 29 15:16:21 crc kubenswrapper[4620]: E0129 15:16:21.995041 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:frr-k8s-webhook-server,Image:registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862,Command:[/frr-k8s],Args:[--log-level=debug --webhook-mode=onlywebhook --disable-cert-rotation=true --namespace=$(NAMESPACE) 
--metrics-bind-address=:7572],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:monitoring,HostPort:0,ContainerPort:7572,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pjjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:{1 0 monitoring},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:{1 0 monitoring},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000700000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod frr-k8s-webhook-server-7df86c4f6c-klnxt_metallb-system(fd14327c-f27e-4e0b-9955-a9cc76e1c253): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:16:21 crc kubenswrapper[4620]: E0129 15:16:21.996885 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:16:26 crc kubenswrapper[4620]: E0129 15:16:26.004381 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:16:26 crc kubenswrapper[4620]: E0129 15:16:26.005652 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrq6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ql7jk_openshift-marketplace(4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:16:26 crc kubenswrapper[4620]: E0129 15:16:26.006971 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" Jan 29 15:16:28 crc kubenswrapper[4620]: I0129 15:16:28.879858 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-jvtb9" Jan 29 15:16:30 crc kubenswrapper[4620]: I0129 15:16:30.610222 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-8n22g" Jan 29 15:16:33 crc kubenswrapper[4620]: I0129 15:16:33.371841 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-llhnw"] Jan 29 15:16:33 crc kubenswrapper[4620]: I0129 15:16:33.375129 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-llhnw" Jan 29 15:16:33 crc kubenswrapper[4620]: I0129 15:16:33.379164 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-flhjw" Jan 29 15:16:33 crc kubenswrapper[4620]: I0129 15:16:33.380208 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 29 15:16:33 crc kubenswrapper[4620]: I0129 15:16:33.380771 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 29 15:16:33 crc kubenswrapper[4620]: I0129 15:16:33.389429 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-llhnw"] Jan 29 15:16:33 crc kubenswrapper[4620]: I0129 15:16:33.547023 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlkdv\" (UniqueName: \"kubernetes.io/projected/1f6f93a5-1485-481e-b042-550498db3e2e-kube-api-access-nlkdv\") pod \"openstack-operator-index-llhnw\" (UID: \"1f6f93a5-1485-481e-b042-550498db3e2e\") " pod="openstack-operators/openstack-operator-index-llhnw" Jan 29 15:16:33 crc kubenswrapper[4620]: I0129 15:16:33.648808 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlkdv\" (UniqueName: \"kubernetes.io/projected/1f6f93a5-1485-481e-b042-550498db3e2e-kube-api-access-nlkdv\") pod \"openstack-operator-index-llhnw\" (UID: \"1f6f93a5-1485-481e-b042-550498db3e2e\") " pod="openstack-operators/openstack-operator-index-llhnw" Jan 29 15:16:33 crc kubenswrapper[4620]: I0129 15:16:33.686680 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlkdv\" (UniqueName: \"kubernetes.io/projected/1f6f93a5-1485-481e-b042-550498db3e2e-kube-api-access-nlkdv\") pod \"openstack-operator-index-llhnw\" (UID: \"1f6f93a5-1485-481e-b042-550498db3e2e\") " pod="openstack-operators/openstack-operator-index-llhnw" Jan 29 15:16:33 crc kubenswrapper[4620]: I0129 15:16:33.699241 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-llhnw" Jan 29 15:16:34 crc kubenswrapper[4620]: E0129 15:16:34.043892 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:16:34 crc kubenswrapper[4620]: E0129 15:16:34.044121 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvbcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-sg6jb_openshift-marketplace(83c6763d-3a90-4ded-b50c-57eb36ad1c0d): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:16:34 crc kubenswrapper[4620]: E0129 15:16:34.046117 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:16:34 crc kubenswrapper[4620]: I0129 15:16:34.111046 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:16:34 crc kubenswrapper[4620]: I0129 15:16:34.111145 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:16:34 crc kubenswrapper[4620]: I0129 15:16:34.153872 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-llhnw"] Jan 29 15:16:34 crc kubenswrapper[4620]: I0129 15:16:34.705392 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-llhnw" event={"ID":"1f6f93a5-1485-481e-b042-550498db3e2e","Type":"ContainerStarted","Data":"262fabcc792f95a67a3828cd3d3ca328d889c6f914b9e5264ec2e26871042dd0"} Jan 29 15:16:35 crc kubenswrapper[4620]: E0129 15:16:35.876926 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:16:36 crc kubenswrapper[4620]: E0129 15:16:36.874330 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:16:37 crc kubenswrapper[4620]: I0129 15:16:37.527348 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-llhnw"] Jan 29 15:16:38 crc kubenswrapper[4620]: I0129 15:16:38.352144 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7dssf"] Jan 29 15:16:38 crc kubenswrapper[4620]: I0129 15:16:38.353717 4620 util.go:30] "No sandbox for pod can be found. 
Jan 29 15:16:38 crc kubenswrapper[4620]: I0129 15:16:38.353717 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7dssf"
Jan 29 15:16:38 crc kubenswrapper[4620]: I0129 15:16:38.386519 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7dssf"]
Jan 29 15:16:38 crc kubenswrapper[4620]: I0129 15:16:38.416742 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2t99\" (UniqueName: \"kubernetes.io/projected/7466ed16-1885-44d8-b340-7e07fb51e497-kube-api-access-m2t99\") pod \"openstack-operator-index-7dssf\" (UID: \"7466ed16-1885-44d8-b340-7e07fb51e497\") " pod="openstack-operators/openstack-operator-index-7dssf"
Jan 29 15:16:38 crc kubenswrapper[4620]: I0129 15:16:38.517162 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2t99\" (UniqueName: \"kubernetes.io/projected/7466ed16-1885-44d8-b340-7e07fb51e497-kube-api-access-m2t99\") pod \"openstack-operator-index-7dssf\" (UID: \"7466ed16-1885-44d8-b340-7e07fb51e497\") " pod="openstack-operators/openstack-operator-index-7dssf"
Jan 29 15:16:38 crc kubenswrapper[4620]: I0129 15:16:38.541863 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2t99\" (UniqueName: \"kubernetes.io/projected/7466ed16-1885-44d8-b340-7e07fb51e497-kube-api-access-m2t99\") pod \"openstack-operator-index-7dssf\" (UID: \"7466ed16-1885-44d8-b340-7e07fb51e497\") " pod="openstack-operators/openstack-operator-index-7dssf"
Jan 29 15:16:38 crc kubenswrapper[4620]: I0129 15:16:38.692737 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7dssf"
Jan 29 15:16:38 crc kubenswrapper[4620]: E0129 15:16:38.971507 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50"
Jan 29 15:16:40 crc kubenswrapper[4620]: I0129 15:16:40.743254 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-llhnw" event={"ID":"1f6f93a5-1485-481e-b042-550498db3e2e","Type":"ContainerStarted","Data":"568ab6b7516797bc7d873ba699a53d65df7be853b4c9cdb3bd8e6017238ea689"}
Jan 29 15:16:40 crc kubenswrapper[4620]: I0129 15:16:40.743350 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-llhnw" podUID="1f6f93a5-1485-481e-b042-550498db3e2e" containerName="registry-server" containerID="cri-o://568ab6b7516797bc7d873ba699a53d65df7be853b4c9cdb3bd8e6017238ea689" gracePeriod=2
Jan 29 15:16:40 crc kubenswrapper[4620]: I0129 15:16:40.793472 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-llhnw" podStartSLOduration=1.371335209 podStartE2EDuration="7.793450004s" podCreationTimestamp="2026-01-29 15:16:33 +0000 UTC" firstStartedPulling="2026-01-29 15:16:34.162696697 +0000 UTC m=+934.775524342" lastFinishedPulling="2026-01-29 15:16:40.584811492 +0000 UTC m=+941.197639137" observedRunningTime="2026-01-29 15:16:40.764442619 +0000 UTC m=+941.377270264" watchObservedRunningTime="2026-01-29 15:16:40.793450004 +0000 UTC m=+941.406277649"
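
[Editor's note] The latency entry above is internally consistent: podStartSLOduration (1.371335209s) is podStartE2EDuration (7.793450004s) minus the image pull window, lastFinishedPulling - firstStartedPulling = 6.422114795s, i.e. the SLO figure excludes time spent pulling. Checking the arithmetic with the timestamps copied from the entry:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied verbatim from the pod_startup_latency_tracker entry.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	first, _ := time.Parse(layout, "2026-01-29 15:16:34.162696697 +0000 UTC")
	last, _ := time.Parse(layout, "2026-01-29 15:16:40.584811492 +0000 UTC")
	pull := last.Sub(first)              // 6.422114795s spent pulling
	e2e := 7793450004 * time.Nanosecond  // podStartE2EDuration
	fmt.Println(e2e - pull)              // 1.371335209s == podStartSLOduration
}
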
source="api" pods=["openstack-operators/openstack-operator-index-7dssf"] Jan 29 15:16:40 crc kubenswrapper[4620]: W0129 15:16:40.836242 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7466ed16_1885_44d8_b340_7e07fb51e497.slice/crio-a9ddb3940ef7d14f21eab75bd0e725cdba296ca04fe5eade5f7304a6ca8e86cc WatchSource:0}: Error finding container a9ddb3940ef7d14f21eab75bd0e725cdba296ca04fe5eade5f7304a6ca8e86cc: Status 404 returned error can't find the container with id a9ddb3940ef7d14f21eab75bd0e725cdba296ca04fe5eade5f7304a6ca8e86cc Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.158791 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-llhnw" Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.273335 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlkdv\" (UniqueName: \"kubernetes.io/projected/1f6f93a5-1485-481e-b042-550498db3e2e-kube-api-access-nlkdv\") pod \"1f6f93a5-1485-481e-b042-550498db3e2e\" (UID: \"1f6f93a5-1485-481e-b042-550498db3e2e\") " Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.278598 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f6f93a5-1485-481e-b042-550498db3e2e-kube-api-access-nlkdv" (OuterVolumeSpecName: "kube-api-access-nlkdv") pod "1f6f93a5-1485-481e-b042-550498db3e2e" (UID: "1f6f93a5-1485-481e-b042-550498db3e2e"). InnerVolumeSpecName "kube-api-access-nlkdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.374819 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlkdv\" (UniqueName: \"kubernetes.io/projected/1f6f93a5-1485-481e-b042-550498db3e2e-kube-api-access-nlkdv\") on node \"crc\" DevicePath \"\"" Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.766036 4620 generic.go:334] "Generic (PLEG): container finished" podID="1f6f93a5-1485-481e-b042-550498db3e2e" containerID="568ab6b7516797bc7d873ba699a53d65df7be853b4c9cdb3bd8e6017238ea689" exitCode=0 Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.766571 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-llhnw" Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.766660 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-llhnw" event={"ID":"1f6f93a5-1485-481e-b042-550498db3e2e","Type":"ContainerDied","Data":"568ab6b7516797bc7d873ba699a53d65df7be853b4c9cdb3bd8e6017238ea689"} Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.766691 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-llhnw" event={"ID":"1f6f93a5-1485-481e-b042-550498db3e2e","Type":"ContainerDied","Data":"262fabcc792f95a67a3828cd3d3ca328d889c6f914b9e5264ec2e26871042dd0"} Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.766710 4620 scope.go:117] "RemoveContainer" containerID="568ab6b7516797bc7d873ba699a53d65df7be853b4c9cdb3bd8e6017238ea689" Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.773910 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7dssf" event={"ID":"7466ed16-1885-44d8-b340-7e07fb51e497","Type":"ContainerStarted","Data":"083f1d79bec9a608ac0f2b13638365e23be59fcfc398c38871d85f4bd00b9475"} Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.774152 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7dssf" event={"ID":"7466ed16-1885-44d8-b340-7e07fb51e497","Type":"ContainerStarted","Data":"a9ddb3940ef7d14f21eab75bd0e725cdba296ca04fe5eade5f7304a6ca8e86cc"} Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.787479 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-7dssf" podStartSLOduration=3.728733659 podStartE2EDuration="3.78745972s" podCreationTimestamp="2026-01-29 15:16:38 +0000 UTC" firstStartedPulling="2026-01-29 15:16:40.840337739 +0000 UTC m=+941.453165384" lastFinishedPulling="2026-01-29 15:16:40.8990638 +0000 UTC m=+941.511891445" observedRunningTime="2026-01-29 15:16:41.785959534 +0000 UTC m=+942.398787189" watchObservedRunningTime="2026-01-29 15:16:41.78745972 +0000 UTC m=+942.400287375" Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.793830 4620 scope.go:117] "RemoveContainer" containerID="568ab6b7516797bc7d873ba699a53d65df7be853b4c9cdb3bd8e6017238ea689" Jan 29 15:16:41 crc kubenswrapper[4620]: E0129 15:16:41.794521 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"568ab6b7516797bc7d873ba699a53d65df7be853b4c9cdb3bd8e6017238ea689\": container with ID starting with 568ab6b7516797bc7d873ba699a53d65df7be853b4c9cdb3bd8e6017238ea689 not found: ID does not exist" containerID="568ab6b7516797bc7d873ba699a53d65df7be853b4c9cdb3bd8e6017238ea689" Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.794557 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"568ab6b7516797bc7d873ba699a53d65df7be853b4c9cdb3bd8e6017238ea689"} err="failed to get container status \"568ab6b7516797bc7d873ba699a53d65df7be853b4c9cdb3bd8e6017238ea689\": rpc error: code = NotFound desc = could not find container \"568ab6b7516797bc7d873ba699a53d65df7be853b4c9cdb3bd8e6017238ea689\": container with ID starting with 568ab6b7516797bc7d873ba699a53d65df7be853b4c9cdb3bd8e6017238ea689 not found: ID does not exist" Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.816958 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/openstack-operator-index-llhnw"] Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.822922 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-llhnw"] Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.935967 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-drv9l"] Jan 29 15:16:41 crc kubenswrapper[4620]: E0129 15:16:41.936240 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6f93a5-1485-481e-b042-550498db3e2e" containerName="registry-server" Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.936259 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6f93a5-1485-481e-b042-550498db3e2e" containerName="registry-server" Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.936405 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f6f93a5-1485-481e-b042-550498db3e2e" containerName="registry-server" Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.937272 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drv9l" Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.952879 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-drv9l"] Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.982047 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8be789be-3047-4e86-84ef-9c0345fff20d-utilities\") pod \"certified-operators-drv9l\" (UID: \"8be789be-3047-4e86-84ef-9c0345fff20d\") " pod="openshift-marketplace/certified-operators-drv9l" Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.982188 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8be789be-3047-4e86-84ef-9c0345fff20d-catalog-content\") pod \"certified-operators-drv9l\" (UID: \"8be789be-3047-4e86-84ef-9c0345fff20d\") " pod="openshift-marketplace/certified-operators-drv9l" Jan 29 15:16:41 crc kubenswrapper[4620]: I0129 15:16:41.982226 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4hll\" (UniqueName: \"kubernetes.io/projected/8be789be-3047-4e86-84ef-9c0345fff20d-kube-api-access-k4hll\") pod \"certified-operators-drv9l\" (UID: \"8be789be-3047-4e86-84ef-9c0345fff20d\") " pod="openshift-marketplace/certified-operators-drv9l" Jan 29 15:16:42 crc kubenswrapper[4620]: I0129 15:16:42.083478 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8be789be-3047-4e86-84ef-9c0345fff20d-utilities\") pod \"certified-operators-drv9l\" (UID: \"8be789be-3047-4e86-84ef-9c0345fff20d\") " pod="openshift-marketplace/certified-operators-drv9l" Jan 29 15:16:42 crc kubenswrapper[4620]: I0129 15:16:42.083550 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8be789be-3047-4e86-84ef-9c0345fff20d-catalog-content\") pod \"certified-operators-drv9l\" (UID: \"8be789be-3047-4e86-84ef-9c0345fff20d\") " pod="openshift-marketplace/certified-operators-drv9l" Jan 29 15:16:42 crc kubenswrapper[4620]: I0129 15:16:42.083569 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-k4hll\" (UniqueName: \"kubernetes.io/projected/8be789be-3047-4e86-84ef-9c0345fff20d-kube-api-access-k4hll\") pod \"certified-operators-drv9l\" (UID: \"8be789be-3047-4e86-84ef-9c0345fff20d\") " pod="openshift-marketplace/certified-operators-drv9l" Jan 29 15:16:42 crc kubenswrapper[4620]: I0129 15:16:42.084215 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8be789be-3047-4e86-84ef-9c0345fff20d-utilities\") pod \"certified-operators-drv9l\" (UID: \"8be789be-3047-4e86-84ef-9c0345fff20d\") " pod="openshift-marketplace/certified-operators-drv9l" Jan 29 15:16:42 crc kubenswrapper[4620]: I0129 15:16:42.084301 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8be789be-3047-4e86-84ef-9c0345fff20d-catalog-content\") pod \"certified-operators-drv9l\" (UID: \"8be789be-3047-4e86-84ef-9c0345fff20d\") " pod="openshift-marketplace/certified-operators-drv9l" Jan 29 15:16:42 crc kubenswrapper[4620]: I0129 15:16:42.101440 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4hll\" (UniqueName: \"kubernetes.io/projected/8be789be-3047-4e86-84ef-9c0345fff20d-kube-api-access-k4hll\") pod \"certified-operators-drv9l\" (UID: \"8be789be-3047-4e86-84ef-9c0345fff20d\") " pod="openshift-marketplace/certified-operators-drv9l" Jan 29 15:16:42 crc kubenswrapper[4620]: I0129 15:16:42.253982 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drv9l" Jan 29 15:16:42 crc kubenswrapper[4620]: I0129 15:16:42.758411 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-drv9l"] Jan 29 15:16:42 crc kubenswrapper[4620]: W0129 15:16:42.765344 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8be789be_3047_4e86_84ef_9c0345fff20d.slice/crio-56cac9a5f67808e8425d121be32299c2341208866ad8ee2aa2a86daa87012a0f WatchSource:0}: Error finding container 56cac9a5f67808e8425d121be32299c2341208866ad8ee2aa2a86daa87012a0f: Status 404 returned error can't find the container with id 56cac9a5f67808e8425d121be32299c2341208866ad8ee2aa2a86daa87012a0f Jan 29 15:16:42 crc kubenswrapper[4620]: I0129 15:16:42.780259 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drv9l" event={"ID":"8be789be-3047-4e86-84ef-9c0345fff20d","Type":"ContainerStarted","Data":"56cac9a5f67808e8425d121be32299c2341208866ad8ee2aa2a86daa87012a0f"} Jan 29 15:16:42 crc kubenswrapper[4620]: I0129 15:16:42.879408 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f6f93a5-1485-481e-b042-550498db3e2e" path="/var/lib/kubelet/pods/1f6f93a5-1485-481e-b042-550498db3e2e/volumes" Jan 29 15:16:43 crc kubenswrapper[4620]: I0129 15:16:43.788130 4620 generic.go:334] "Generic (PLEG): container finished" podID="8be789be-3047-4e86-84ef-9c0345fff20d" containerID="cf0a711d0d596cecdbaf9d79a56f727d7544ad99f878c637a2d45b504144f292" exitCode=0 Jan 29 15:16:43 crc kubenswrapper[4620]: I0129 15:16:43.788189 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drv9l" event={"ID":"8be789be-3047-4e86-84ef-9c0345fff20d","Type":"ContainerDied","Data":"cf0a711d0d596cecdbaf9d79a56f727d7544ad99f878c637a2d45b504144f292"} Jan 29 15:16:43 crc kubenswrapper[4620]: E0129 15:16:43.917107 
4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:16:43 crc kubenswrapper[4620]: E0129 15:16:43.917297 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4hll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-drv9l_openshift-marketplace(8be789be-3047-4e86-84ef-9c0345fff20d): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:16:43 crc kubenswrapper[4620]: E0129 15:16:43.918484 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:16:44 crc kubenswrapper[4620]: E0129 15:16:44.796672 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:16:46 crc kubenswrapper[4620]: E0129 15:16:46.876128 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 
15:16:48 crc kubenswrapper[4620]: I0129 15:16:48.693812 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-7dssf" Jan 29 15:16:48 crc kubenswrapper[4620]: I0129 15:16:48.694416 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-7dssf" Jan 29 15:16:48 crc kubenswrapper[4620]: I0129 15:16:48.725881 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-7dssf" Jan 29 15:16:48 crc kubenswrapper[4620]: I0129 15:16:48.884155 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-7dssf" Jan 29 15:16:49 crc kubenswrapper[4620]: I0129 15:16:49.780173 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv"] Jan 29 15:16:49 crc kubenswrapper[4620]: I0129 15:16:49.781534 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv" Jan 29 15:16:49 crc kubenswrapper[4620]: I0129 15:16:49.787621 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-kzhgl" Jan 29 15:16:49 crc kubenswrapper[4620]: I0129 15:16:49.800527 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv"] Jan 29 15:16:49 crc kubenswrapper[4620]: I0129 15:16:49.889003 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f-bundle\") pod \"8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv\" (UID: \"c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f\") " pod="openstack-operators/8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv" Jan 29 15:16:49 crc kubenswrapper[4620]: I0129 15:16:49.889164 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f-util\") pod \"8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv\" (UID: \"c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f\") " pod="openstack-operators/8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv" Jan 29 15:16:49 crc kubenswrapper[4620]: I0129 15:16:49.889315 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72wqb\" (UniqueName: \"kubernetes.io/projected/c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f-kube-api-access-72wqb\") pod \"8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv\" (UID: \"c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f\") " pod="openstack-operators/8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv" Jan 29 15:16:49 crc kubenswrapper[4620]: I0129 15:16:49.990340 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72wqb\" (UniqueName: \"kubernetes.io/projected/c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f-kube-api-access-72wqb\") pod \"8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv\" (UID: \"c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f\") " pod="openstack-operators/8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv" Jan 29 15:16:49 crc kubenswrapper[4620]: 
I0129 15:16:49.990434 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f-bundle\") pod \"8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv\" (UID: \"c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f\") " pod="openstack-operators/8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv" Jan 29 15:16:49 crc kubenswrapper[4620]: I0129 15:16:49.990480 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f-util\") pod \"8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv\" (UID: \"c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f\") " pod="openstack-operators/8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv" Jan 29 15:16:49 crc kubenswrapper[4620]: I0129 15:16:49.990968 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f-util\") pod \"8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv\" (UID: \"c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f\") " pod="openstack-operators/8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv" Jan 29 15:16:49 crc kubenswrapper[4620]: I0129 15:16:49.991019 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f-bundle\") pod \"8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv\" (UID: \"c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f\") " pod="openstack-operators/8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv" Jan 29 15:16:50 crc kubenswrapper[4620]: I0129 15:16:50.021612 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72wqb\" (UniqueName: \"kubernetes.io/projected/c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f-kube-api-access-72wqb\") pod \"8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv\" (UID: \"c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f\") " pod="openstack-operators/8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv" Jan 29 15:16:50 crc kubenswrapper[4620]: I0129 15:16:50.099730 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv" Jan 29 15:16:50 crc kubenswrapper[4620]: I0129 15:16:50.592597 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv"] Jan 29 15:16:50 crc kubenswrapper[4620]: W0129 15:16:50.600921 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6235846_9e2e_4fd7_8e7d_d4eb2b56cb3f.slice/crio-af19a8d6a9d7c396abb247d568d3e4fa5ecfdb1dc683cb386ebf396f247fc877 WatchSource:0}: Error finding container af19a8d6a9d7c396abb247d568d3e4fa5ecfdb1dc683cb386ebf396f247fc877: Status 404 returned error can't find the container with id af19a8d6a9d7c396abb247d568d3e4fa5ecfdb1dc683cb386ebf396f247fc877 Jan 29 15:16:50 crc kubenswrapper[4620]: I0129 15:16:50.829523 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv" event={"ID":"c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f","Type":"ContainerStarted","Data":"719930480f74595bd9005ca0fd44aa600bee8fada5113f1d01d10b29261a2c59"} Jan 29 15:16:50 crc kubenswrapper[4620]: I0129 15:16:50.829570 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv" event={"ID":"c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f","Type":"ContainerStarted","Data":"af19a8d6a9d7c396abb247d568d3e4fa5ecfdb1dc683cb386ebf396f247fc877"} Jan 29 15:16:51 crc kubenswrapper[4620]: E0129 15:16:51.028366 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862" Jan 29 15:16:51 crc kubenswrapper[4620]: E0129 15:16:51.028511 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:cp-frr-files,Image:registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862,Command:[/bin/sh -c cp -rLf /tmp/frr/* 
/etc/frr/],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:frr-startup,ReadOnly:false,MountPath:/tmp/frr,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:frr-conf,ReadOnly:false,MountPath:/etc/frr,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7md7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*100,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*101,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod frr-k8s-h6cmz_metallb-system(e6981bff-0689-47b1-96f6-f265340449f9): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:16:51 crc kubenswrapper[4620]: E0129 15:16:51.029832 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:16:51 crc kubenswrapper[4620]: I0129 15:16:51.835364 4620 generic.go:334] "Generic (PLEG): container finished" podID="c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f" containerID="719930480f74595bd9005ca0fd44aa600bee8fada5113f1d01d10b29261a2c59" exitCode=0 Jan 29 15:16:51 crc kubenswrapper[4620]: I0129 15:16:51.835407 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv" event={"ID":"c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f","Type":"ContainerDied","Data":"719930480f74595bd9005ca0fd44aa600bee8fada5113f1d01d10b29261a2c59"} Jan 29 15:16:51 crc kubenswrapper[4620]: E0129 15:16:51.992383 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862" Jan 29 15:16:51 crc kubenswrapper[4620]: E0129 15:16:51.992870 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:frr-k8s-webhook-server,Image:registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862,Command:[/frr-k8s],Args:[--log-level=debug --webhook-mode=onlywebhook 
--disable-cert-rotation=true --namespace=$(NAMESPACE) --metrics-bind-address=:7572],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:monitoring,HostPort:0,ContainerPort:7572,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pjjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:{1 0 monitoring},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:{1 0 monitoring},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000700000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod frr-k8s-webhook-server-7df86c4f6c-klnxt_metallb-system(fd14327c-f27e-4e0b-9955-a9cc76e1c253): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:16:51 crc kubenswrapper[4620]: E0129 15:16:51.994178 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:16:51 crc kubenswrapper[4620]: E0129 15:16:51.994266 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:16:51 crc kubenswrapper[4620]: E0129 15:16:51.994426 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrq6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ql7jk_openshift-marketplace(4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:16:51 crc kubenswrapper[4620]: E0129 15:16:51.995622 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" Jan 29 15:16:54 crc kubenswrapper[4620]: I0129 15:16:54.854827 4620 generic.go:334] "Generic (PLEG): container finished" podID="c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f" containerID="a99a8ef6c6efd96f03f1b83fd524877132a024ca9bac482ab3bcacb653b9d816" exitCode=0 Jan 29 15:16:54 crc kubenswrapper[4620]: I0129 15:16:54.854926 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv" event={"ID":"c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f","Type":"ContainerDied","Data":"a99a8ef6c6efd96f03f1b83fd524877132a024ca9bac482ab3bcacb653b9d816"} Jan 29 15:16:55 crc kubenswrapper[4620]: I0129 15:16:55.863916 4620 generic.go:334] "Generic (PLEG): container finished" podID="c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f" containerID="1c4cc41b00d37f157190feef6c5eec586d6e0b09dc46c4bf8d8c5ce3a8f7c363" exitCode=0 Jan 29 15:16:55 crc kubenswrapper[4620]: I0129 15:16:55.864009 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv" event={"ID":"c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f","Type":"ContainerDied","Data":"1c4cc41b00d37f157190feef6c5eec586d6e0b09dc46c4bf8d8c5ce3a8f7c363"} Jan 29 15:16:57 crc 
kubenswrapper[4620]: I0129 15:16:57.185986 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv" Jan 29 15:16:57 crc kubenswrapper[4620]: I0129 15:16:57.285602 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72wqb\" (UniqueName: \"kubernetes.io/projected/c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f-kube-api-access-72wqb\") pod \"c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f\" (UID: \"c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f\") " Jan 29 15:16:57 crc kubenswrapper[4620]: I0129 15:16:57.285739 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f-bundle\") pod \"c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f\" (UID: \"c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f\") " Jan 29 15:16:57 crc kubenswrapper[4620]: I0129 15:16:57.285847 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f-util\") pod \"c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f\" (UID: \"c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f\") " Jan 29 15:16:57 crc kubenswrapper[4620]: I0129 15:16:57.286346 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f-bundle" (OuterVolumeSpecName: "bundle") pod "c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f" (UID: "c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:16:57 crc kubenswrapper[4620]: I0129 15:16:57.301005 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f-util" (OuterVolumeSpecName: "util") pod "c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f" (UID: "c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:16:57 crc kubenswrapper[4620]: I0129 15:16:57.303937 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f-kube-api-access-72wqb" (OuterVolumeSpecName: "kube-api-access-72wqb") pod "c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f" (UID: "c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f"). InnerVolumeSpecName "kube-api-access-72wqb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:16:57 crc kubenswrapper[4620]: E0129 15:16:57.324189 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:16:57 crc kubenswrapper[4620]: E0129 15:16:57.324347 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4hll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-drv9l_openshift-marketplace(8be789be-3047-4e86-84ef-9c0345fff20d): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:16:57 crc kubenswrapper[4620]: E0129 15:16:57.325514 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:16:57 crc kubenswrapper[4620]: I0129 15:16:57.387470 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72wqb\" (UniqueName: \"kubernetes.io/projected/c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f-kube-api-access-72wqb\") on node \"crc\" DevicePath \"\"" Jan 29 15:16:57 crc kubenswrapper[4620]: I0129 15:16:57.387506 4620 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:16:57 crc kubenswrapper[4620]: I0129 15:16:57.387518 4620 reconciler_common.go:293] "Volume detached for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f-util\") on node \"crc\" DevicePath \"\"" Jan 29 15:16:57 crc kubenswrapper[4620]: I0129 15:16:57.875928 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv" event={"ID":"c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f","Type":"ContainerDied","Data":"af19a8d6a9d7c396abb247d568d3e4fa5ecfdb1dc683cb386ebf396f247fc877"} Jan 29 15:16:57 crc kubenswrapper[4620]: I0129 15:16:57.875959 4620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af19a8d6a9d7c396abb247d568d3e4fa5ecfdb1dc683cb386ebf396f247fc877" Jan 29 15:16:57 crc kubenswrapper[4620]: I0129 15:16:57.876275 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv" Jan 29 15:17:00 crc kubenswrapper[4620]: I0129 15:17:00.845391 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-66776849dc-cjcst"] Jan 29 15:17:00 crc kubenswrapper[4620]: E0129 15:17:00.845983 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f" containerName="extract" Jan 29 15:17:00 crc kubenswrapper[4620]: I0129 15:17:00.846000 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f" containerName="extract" Jan 29 15:17:00 crc kubenswrapper[4620]: E0129 15:17:00.846017 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f" containerName="util" Jan 29 15:17:00 crc kubenswrapper[4620]: I0129 15:17:00.846024 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f" containerName="util" Jan 29 15:17:00 crc kubenswrapper[4620]: E0129 15:17:00.846046 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f" containerName="pull" Jan 29 15:17:00 crc kubenswrapper[4620]: I0129 15:17:00.846053 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f" containerName="pull" Jan 29 15:17:00 crc kubenswrapper[4620]: I0129 15:17:00.846182 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f" containerName="extract" Jan 29 15:17:00 crc kubenswrapper[4620]: I0129 15:17:00.846664 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-66776849dc-cjcst" Jan 29 15:17:00 crc kubenswrapper[4620]: I0129 15:17:00.853081 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-wf4xd" Jan 29 15:17:00 crc kubenswrapper[4620]: I0129 15:17:00.935501 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l46tl\" (UniqueName: \"kubernetes.io/projected/8c15503a-bec6-4b41-919e-067dd067232b-kube-api-access-l46tl\") pod \"openstack-operator-controller-init-66776849dc-cjcst\" (UID: \"8c15503a-bec6-4b41-919e-067dd067232b\") " pod="openstack-operators/openstack-operator-controller-init-66776849dc-cjcst" Jan 29 15:17:00 crc kubenswrapper[4620]: I0129 15:17:00.959902 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-66776849dc-cjcst"] Jan 29 15:17:00 crc kubenswrapper[4620]: E0129 15:17:00.996609 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:17:00 crc kubenswrapper[4620]: E0129 15:17:00.996790 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvbcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-sg6jb_openshift-marketplace(83c6763d-3a90-4ded-b50c-57eb36ad1c0d): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:17:00 crc kubenswrapper[4620]: E0129 15:17:00.998107 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing 
source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:17:01 crc kubenswrapper[4620]: I0129 15:17:01.036492 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l46tl\" (UniqueName: \"kubernetes.io/projected/8c15503a-bec6-4b41-919e-067dd067232b-kube-api-access-l46tl\") pod \"openstack-operator-controller-init-66776849dc-cjcst\" (UID: \"8c15503a-bec6-4b41-919e-067dd067232b\") " pod="openstack-operators/openstack-operator-controller-init-66776849dc-cjcst" Jan 29 15:17:01 crc kubenswrapper[4620]: I0129 15:17:01.089184 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l46tl\" (UniqueName: \"kubernetes.io/projected/8c15503a-bec6-4b41-919e-067dd067232b-kube-api-access-l46tl\") pod \"openstack-operator-controller-init-66776849dc-cjcst\" (UID: \"8c15503a-bec6-4b41-919e-067dd067232b\") " pod="openstack-operators/openstack-operator-controller-init-66776849dc-cjcst" Jan 29 15:17:01 crc kubenswrapper[4620]: I0129 15:17:01.165958 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-66776849dc-cjcst" Jan 29 15:17:01 crc kubenswrapper[4620]: I0129 15:17:01.467744 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-66776849dc-cjcst"] Jan 29 15:17:01 crc kubenswrapper[4620]: I0129 15:17:01.474032 4620 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:17:01 crc kubenswrapper[4620]: I0129 15:17:01.916577 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-66776849dc-cjcst" event={"ID":"8c15503a-bec6-4b41-919e-067dd067232b","Type":"ContainerStarted","Data":"9248ae25138a28e5169884ed8af8e118a73f38aa1b5aac4f2e7360e39402f86a"} Jan 29 15:17:02 crc kubenswrapper[4620]: E0129 15:17:02.874272 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:17:04 crc kubenswrapper[4620]: I0129 15:17:04.111145 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:17:04 crc kubenswrapper[4620]: I0129 15:17:04.111537 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:17:05 crc kubenswrapper[4620]: E0129 15:17:05.413696 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" Jan 29 15:17:05 crc kubenswrapper[4620]: E0129 15:17:05.878973 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:17:09 crc kubenswrapper[4620]: I0129 15:17:09.964688 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-66776849dc-cjcst" event={"ID":"8c15503a-bec6-4b41-919e-067dd067232b","Type":"ContainerStarted","Data":"bc26e8cf11f1c5a5d36e31035fb76682348cc3fb223648836a9df8dcb19206d9"} Jan 29 15:17:10 crc kubenswrapper[4620]: E0129 15:17:10.879567 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:17:10 crc kubenswrapper[4620]: I0129 15:17:10.969160 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-66776849dc-cjcst" Jan 29 15:17:11 crc kubenswrapper[4620]: I0129 15:17:11.005676 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-66776849dc-cjcst" podStartSLOduration=2.757860469 podStartE2EDuration="11.00565292s" podCreationTimestamp="2026-01-29 15:17:00 +0000 UTC" firstStartedPulling="2026-01-29 15:17:01.473799717 +0000 UTC m=+962.086627352" lastFinishedPulling="2026-01-29 15:17:09.721592138 +0000 UTC m=+970.334419803" observedRunningTime="2026-01-29 15:17:11.001870491 +0000 UTC m=+971.614698146" watchObservedRunningTime="2026-01-29 15:17:11.00565292 +0000 UTC m=+971.618480575" Jan 29 15:17:13 crc kubenswrapper[4620]: E0129 15:17:13.873851 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:17:16 crc kubenswrapper[4620]: E0129 15:17:16.874622 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:17:17 crc kubenswrapper[4620]: E0129 15:17:17.873395 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:17:17 crc kubenswrapper[4620]: E0129 15:17:17.873725 4620 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" Jan 29 15:17:21 crc kubenswrapper[4620]: I0129 15:17:21.168952 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-66776849dc-cjcst" Jan 29 15:17:23 crc kubenswrapper[4620]: E0129 15:17:23.004587 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:17:23 crc kubenswrapper[4620]: E0129 15:17:23.004745 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4hll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-drv9l_openshift-marketplace(8be789be-3047-4e86-84ef-9c0345fff20d): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:17:23 crc kubenswrapper[4620]: E0129 15:17:23.005933 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:17:26 crc kubenswrapper[4620]: E0129 15:17:26.874466 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:17:28 crc kubenswrapper[4620]: E0129 15:17:28.877137 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:17:32 crc kubenswrapper[4620]: E0129 15:17:32.000786 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862" Jan 29 15:17:32 crc kubenswrapper[4620]: E0129 15:17:32.001247 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:cp-frr-files,Image:registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862,Command:[/bin/sh -c cp -rLf /tmp/frr/* /etc/frr/],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:frr-startup,ReadOnly:false,MountPath:/tmp/frr,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:frr-conf,ReadOnly:false,MountPath:/etc/frr,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7md7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*100,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*101,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod frr-k8s-h6cmz_metallb-system(e6981bff-0689-47b1-96f6-f265340449f9): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:17:32 crc kubenswrapper[4620]: E0129 15:17:32.002425 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:17:33 crc kubenswrapper[4620]: E0129 
15:17:33.001477 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:17:33 crc kubenswrapper[4620]: E0129 15:17:33.001940 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrq6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ql7jk_openshift-marketplace(4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:17:33 crc kubenswrapper[4620]: E0129 15:17:33.003396 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" Jan 29 15:17:34 crc kubenswrapper[4620]: I0129 15:17:34.111544 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:17:34 crc kubenswrapper[4620]: I0129 15:17:34.111623 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:17:34 crc 
kubenswrapper[4620]: I0129 15:17:34.111692 4620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:17:34 crc kubenswrapper[4620]: I0129 15:17:34.112447 4620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"530d4b8825c86a1d226272eaddfd8776e92faf5ad624ab12e26dd0d4fe879bf7"} pod="openshift-machine-config-operator/machine-config-daemon-7469t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:17:34 crc kubenswrapper[4620]: I0129 15:17:34.112500 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" containerID="cri-o://530d4b8825c86a1d226272eaddfd8776e92faf5ad624ab12e26dd0d4fe879bf7" gracePeriod=600 Jan 29 15:17:34 crc kubenswrapper[4620]: E0129 15:17:34.873737 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:17:35 crc kubenswrapper[4620]: I0129 15:17:35.110427 4620 generic.go:334] "Generic (PLEG): container finished" podID="a76cce43-3d01-4158-b23a-e21fd5927792" containerID="530d4b8825c86a1d226272eaddfd8776e92faf5ad624ab12e26dd0d4fe879bf7" exitCode=0 Jan 29 15:17:35 crc kubenswrapper[4620]: I0129 15:17:35.110488 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerDied","Data":"530d4b8825c86a1d226272eaddfd8776e92faf5ad624ab12e26dd0d4fe879bf7"} Jan 29 15:17:35 crc kubenswrapper[4620]: I0129 15:17:35.110836 4620 scope.go:117] "RemoveContainer" containerID="66781c2a016809706f71009a78161c76f619ea542b24dd9a5d78b07cc0a0ddc6" Jan 29 15:17:37 crc kubenswrapper[4620]: I0129 15:17:37.125254 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerStarted","Data":"871cbbae8f526a267583740516b31e7ceb7222ae119d96420315de0c8548a400"} Jan 29 15:17:37 crc kubenswrapper[4620]: E0129 15:17:37.874640 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.623432 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d6fdb96dc-5r2sp"] Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.624383 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d6fdb96dc-5r2sp" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.642725 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-tl6vt" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.644136 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-858d89fd-mjknx"] Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.644954 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-858d89fd-mjknx" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.659940 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d6fdb96dc-5r2sp"] Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.670129 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-858d89fd-mjknx"] Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.670277 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-jkrk7" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.685854 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4swd\" (UniqueName: \"kubernetes.io/projected/eba5d75b-67b6-45b6-99a6-508fbbcd6fbc-kube-api-access-m4swd\") pod \"barbican-operator-controller-manager-7d6fdb96dc-5r2sp\" (UID: \"eba5d75b-67b6-45b6-99a6-508fbbcd6fbc\") " pod="openstack-operators/barbican-operator-controller-manager-7d6fdb96dc-5r2sp" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.685994 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58pfw\" (UniqueName: \"kubernetes.io/projected/cf77b77d-178f-45cc-854f-5ae0438eac47-kube-api-access-58pfw\") pod \"cinder-operator-controller-manager-858d89fd-mjknx\" (UID: \"cf77b77d-178f-45cc-854f-5ae0438eac47\") " pod="openstack-operators/cinder-operator-controller-manager-858d89fd-mjknx" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.699603 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-dd77988f8-vpfmk"] Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.700435 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-dd77988f8-vpfmk" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.705982 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-s6zwz" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.711968 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-f8c4db9df-wdbkp"] Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.712709 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-wdbkp" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.715635 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-95s56" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.757926 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-dd77988f8-vpfmk"] Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.760789 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-f8c4db9df-wdbkp"] Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.772341 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-d8b84fbc-mhkm4"] Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.787616 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58pfw\" (UniqueName: \"kubernetes.io/projected/cf77b77d-178f-45cc-854f-5ae0438eac47-kube-api-access-58pfw\") pod \"cinder-operator-controller-manager-858d89fd-mjknx\" (UID: \"cf77b77d-178f-45cc-854f-5ae0438eac47\") " pod="openstack-operators/cinder-operator-controller-manager-858d89fd-mjknx" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.787683 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4swd\" (UniqueName: \"kubernetes.io/projected/eba5d75b-67b6-45b6-99a6-508fbbcd6fbc-kube-api-access-m4swd\") pod \"barbican-operator-controller-manager-7d6fdb96dc-5r2sp\" (UID: \"eba5d75b-67b6-45b6-99a6-508fbbcd6fbc\") " pod="openstack-operators/barbican-operator-controller-manager-7d6fdb96dc-5r2sp" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.787716 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvmhk\" (UniqueName: \"kubernetes.io/projected/66c3f803-5b87-4ac9-9673-55cfa299abda-kube-api-access-lvmhk\") pod \"designate-operator-controller-manager-dd77988f8-vpfmk\" (UID: \"66c3f803-5b87-4ac9-9673-55cfa299abda\") " pod="openstack-operators/designate-operator-controller-manager-dd77988f8-vpfmk" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.787792 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tckzn\" (UniqueName: \"kubernetes.io/projected/0839a549-c728-40e1-bf59-d8eb6cefc3f2-kube-api-access-tckzn\") pod \"glance-operator-controller-manager-f8c4db9df-wdbkp\" (UID: \"0839a549-c728-40e1-bf59-d8eb6cefc3f2\") " pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-wdbkp" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.820442 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-gl2f9"] Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.821045 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-mhkm4" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.822232 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gl2f9" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.829282 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-sc472" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.829441 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-qqqf9" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.860946 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-d8b84fbc-mhkm4"] Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.889971 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tckzn\" (UniqueName: \"kubernetes.io/projected/0839a549-c728-40e1-bf59-d8eb6cefc3f2-kube-api-access-tckzn\") pod \"glance-operator-controller-manager-f8c4db9df-wdbkp\" (UID: \"0839a549-c728-40e1-bf59-d8eb6cefc3f2\") " pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-wdbkp" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.890327 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbm8b\" (UniqueName: \"kubernetes.io/projected/c0a5289e-e06f-456b-9dd9-ab08f6f3e2f7-kube-api-access-vbm8b\") pod \"heat-operator-controller-manager-d8b84fbc-mhkm4\" (UID: \"c0a5289e-e06f-456b-9dd9-ab08f6f3e2f7\") " pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-mhkm4" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.890380 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvmhk\" (UniqueName: \"kubernetes.io/projected/66c3f803-5b87-4ac9-9673-55cfa299abda-kube-api-access-lvmhk\") pod \"designate-operator-controller-manager-dd77988f8-vpfmk\" (UID: \"66c3f803-5b87-4ac9-9673-55cfa299abda\") " pod="openstack-operators/designate-operator-controller-manager-dd77988f8-vpfmk" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.890433 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79gbs\" (UniqueName: \"kubernetes.io/projected/18e5dfa7-ffa3-4a45-b4bd-9b3e8f5e0f21-kube-api-access-79gbs\") pod \"horizon-operator-controller-manager-5fb775575f-gl2f9\" (UID: \"18e5dfa7-ffa3-4a45-b4bd-9b3e8f5e0f21\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gl2f9" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.892090 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4swd\" (UniqueName: \"kubernetes.io/projected/eba5d75b-67b6-45b6-99a6-508fbbcd6fbc-kube-api-access-m4swd\") pod \"barbican-operator-controller-manager-7d6fdb96dc-5r2sp\" (UID: \"eba5d75b-67b6-45b6-99a6-508fbbcd6fbc\") " pod="openstack-operators/barbican-operator-controller-manager-7d6fdb96dc-5r2sp" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.898490 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58pfw\" (UniqueName: \"kubernetes.io/projected/cf77b77d-178f-45cc-854f-5ae0438eac47-kube-api-access-58pfw\") pod \"cinder-operator-controller-manager-858d89fd-mjknx\" (UID: \"cf77b77d-178f-45cc-854f-5ae0438eac47\") " pod="openstack-operators/cinder-operator-controller-manager-858d89fd-mjknx" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.900475 4620 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-gl2f9"] Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.901107 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-zg6ls"] Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.902040 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zg6ls" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.910708 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-866c9d5b98-nvhfw"] Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.911454 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-nvhfw" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.912487 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.912642 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qh8v6" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.913807 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-92wcc" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.926960 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-zg6ls"] Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.942786 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-866c9d5b98-nvhfw"] Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.944350 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d6fdb96dc-5r2sp" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.976176 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-858d89fd-mjknx" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.980508 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7f9d69db65-fpcjf"] Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.981554 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7f9d69db65-fpcjf" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.983350 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-26dlq" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.989774 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-76c896469f-zjnc9"] Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.990988 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbm8b\" (UniqueName: \"kubernetes.io/projected/c0a5289e-e06f-456b-9dd9-ab08f6f3e2f7-kube-api-access-vbm8b\") pod \"heat-operator-controller-manager-d8b84fbc-mhkm4\" (UID: \"c0a5289e-e06f-456b-9dd9-ab08f6f3e2f7\") " pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-mhkm4" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.991049 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79gbs\" (UniqueName: \"kubernetes.io/projected/18e5dfa7-ffa3-4a45-b4bd-9b3e8f5e0f21-kube-api-access-79gbs\") pod \"horizon-operator-controller-manager-5fb775575f-gl2f9\" (UID: \"18e5dfa7-ffa3-4a45-b4bd-9b3e8f5e0f21\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gl2f9" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.991074 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ea8203c-2846-46ef-be3b-49596b6edc45-cert\") pod \"infra-operator-controller-manager-79955696d6-zg6ls\" (UID: \"6ea8203c-2846-46ef-be3b-49596b6edc45\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zg6ls" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.991096 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwjnw\" (UniqueName: \"kubernetes.io/projected/eb53cd1f-b0ca-41b1-a611-b71cc5ba71d5-kube-api-access-zwjnw\") pod \"ironic-operator-controller-manager-866c9d5b98-nvhfw\" (UID: \"eb53cd1f-b0ca-41b1-a611-b71cc5ba71d5\") " pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-nvhfw" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.991123 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwwdf\" (UniqueName: \"kubernetes.io/projected/379029bd-b764-49a9-b9d0-cdf5d69e2276-kube-api-access-jwwdf\") pod \"keystone-operator-controller-manager-7f9d69db65-fpcjf\" (UID: \"379029bd-b764-49a9-b9d0-cdf5d69e2276\") " pod="openstack-operators/keystone-operator-controller-manager-7f9d69db65-fpcjf" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.991160 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj54v\" (UniqueName: \"kubernetes.io/projected/6ea8203c-2846-46ef-be3b-49596b6edc45-kube-api-access-sj54v\") pod \"infra-operator-controller-manager-79955696d6-zg6ls\" (UID: \"6ea8203c-2846-46ef-be3b-49596b6edc45\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zg6ls" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.991253 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-76c896469f-zjnc9" Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.992422 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7f9d69db65-fpcjf"] Jan 29 15:17:38 crc kubenswrapper[4620]: I0129 15:17:38.999853 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-9jkx8" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.019945 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-76c896469f-zjnc9"] Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.028202 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79gbs\" (UniqueName: \"kubernetes.io/projected/18e5dfa7-ffa3-4a45-b4bd-9b3e8f5e0f21-kube-api-access-79gbs\") pod \"horizon-operator-controller-manager-5fb775575f-gl2f9\" (UID: \"18e5dfa7-ffa3-4a45-b4bd-9b3e8f5e0f21\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gl2f9" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.035681 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tckzn\" (UniqueName: \"kubernetes.io/projected/0839a549-c728-40e1-bf59-d8eb6cefc3f2-kube-api-access-tckzn\") pod \"glance-operator-controller-manager-f8c4db9df-wdbkp\" (UID: \"0839a549-c728-40e1-bf59-d8eb6cefc3f2\") " pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-wdbkp" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.060306 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbm8b\" (UniqueName: \"kubernetes.io/projected/c0a5289e-e06f-456b-9dd9-ab08f6f3e2f7-kube-api-access-vbm8b\") pod \"heat-operator-controller-manager-d8b84fbc-mhkm4\" (UID: \"c0a5289e-e06f-456b-9dd9-ab08f6f3e2f7\") " pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-mhkm4" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.052835 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvmhk\" (UniqueName: \"kubernetes.io/projected/66c3f803-5b87-4ac9-9673-55cfa299abda-kube-api-access-lvmhk\") pod \"designate-operator-controller-manager-dd77988f8-vpfmk\" (UID: \"66c3f803-5b87-4ac9-9673-55cfa299abda\") " pod="openstack-operators/designate-operator-controller-manager-dd77988f8-vpfmk" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.071033 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-2ktvb"] Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.072181 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2ktvb" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.087570 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-6nq4h" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.094208 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7t2m\" (UniqueName: \"kubernetes.io/projected/04f845cd-06d1-47cf-975c-5ad809f0734a-kube-api-access-b7t2m\") pod \"manila-operator-controller-manager-76c896469f-zjnc9\" (UID: \"04f845cd-06d1-47cf-975c-5ad809f0734a\") " pod="openstack-operators/manila-operator-controller-manager-76c896469f-zjnc9" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.094267 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ea8203c-2846-46ef-be3b-49596b6edc45-cert\") pod \"infra-operator-controller-manager-79955696d6-zg6ls\" (UID: \"6ea8203c-2846-46ef-be3b-49596b6edc45\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zg6ls" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.094290 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwjnw\" (UniqueName: \"kubernetes.io/projected/eb53cd1f-b0ca-41b1-a611-b71cc5ba71d5-kube-api-access-zwjnw\") pod \"ironic-operator-controller-manager-866c9d5b98-nvhfw\" (UID: \"eb53cd1f-b0ca-41b1-a611-b71cc5ba71d5\") " pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-nvhfw" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.094322 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwwdf\" (UniqueName: \"kubernetes.io/projected/379029bd-b764-49a9-b9d0-cdf5d69e2276-kube-api-access-jwwdf\") pod \"keystone-operator-controller-manager-7f9d69db65-fpcjf\" (UID: \"379029bd-b764-49a9-b9d0-cdf5d69e2276\") " pod="openstack-operators/keystone-operator-controller-manager-7f9d69db65-fpcjf" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.094356 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7wwx\" (UniqueName: \"kubernetes.io/projected/b5ef0cec-fba0-46b1-8410-cb3fd8551106-kube-api-access-x7wwx\") pod \"mariadb-operator-controller-manager-67bf948998-2ktvb\" (UID: \"b5ef0cec-fba0-46b1-8410-cb3fd8551106\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2ktvb" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.094377 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj54v\" (UniqueName: \"kubernetes.io/projected/6ea8203c-2846-46ef-be3b-49596b6edc45-kube-api-access-sj54v\") pod \"infra-operator-controller-manager-79955696d6-zg6ls\" (UID: \"6ea8203c-2846-46ef-be3b-49596b6edc45\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zg6ls" Jan 29 15:17:39 crc kubenswrapper[4620]: E0129 15:17:39.094987 4620 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:17:39 crc kubenswrapper[4620]: E0129 15:17:39.095024 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ea8203c-2846-46ef-be3b-49596b6edc45-cert podName:6ea8203c-2846-46ef-be3b-49596b6edc45 nodeName:}" failed. 
Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.097443 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-jx2jt"]
Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.098320 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-jx2jt"
Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.101706 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-9v9x5"
Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.121826 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-2ktvb"]
Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.126932 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-jx2jt"]
Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.135924 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-68cb478976-m69zs"]
Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.137320 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-68cb478976-m69zs"
Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.145308 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwjnw\" (UniqueName: \"kubernetes.io/projected/eb53cd1f-b0ca-41b1-a611-b71cc5ba71d5-kube-api-access-zwjnw\") pod \"ironic-operator-controller-manager-866c9d5b98-nvhfw\" (UID: \"eb53cd1f-b0ca-41b1-a611-b71cc5ba71d5\") " pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-nvhfw"
Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.155638 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-jxrsx"
Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.155858 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68f8cb846c-lwdxt"]
Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.156723 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-lwdxt"
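The nestedpendingoperations errors in this log show the kubelet's per-volume retry backoff: the first failed MountVolume.SetUp for the infra-operator "cert" volume is held back for 500ms, and the later retries for the same volume (visible further down) double that to 1s and then 2s. A small Go sketch of that doubling, mirroring the durationBeforeRetry progression; the cap value is an assumption for illustration and does not appear in this log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// First retry delay seen above: 500ms; each subsequent failure doubles it.
	delay := 500 * time.Millisecond
	maxDelay := 2 * time.Minute // assumed cap, not taken from this log
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d failed: no retries permitted for %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}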
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-lwdxt" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.158533 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwwdf\" (UniqueName: \"kubernetes.io/projected/379029bd-b764-49a9-b9d0-cdf5d69e2276-kube-api-access-jwwdf\") pod \"keystone-operator-controller-manager-7f9d69db65-fpcjf\" (UID: \"379029bd-b764-49a9-b9d0-cdf5d69e2276\") " pod="openstack-operators/keystone-operator-controller-manager-7f9d69db65-fpcjf" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.159945 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-68cb478976-m69zs"] Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.162257 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-xw4th" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.174991 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj54v\" (UniqueName: \"kubernetes.io/projected/6ea8203c-2846-46ef-be3b-49596b6edc45-kube-api-access-sj54v\") pod \"infra-operator-controller-manager-79955696d6-zg6ls\" (UID: \"6ea8203c-2846-46ef-be3b-49596b6edc45\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zg6ls" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.175701 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68f8cb846c-lwdxt"] Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.193591 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-wdbkp" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.194294 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-mhkm4" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.194826 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4h5q\" (UniqueName: \"kubernetes.io/projected/1fa9c6ba-6bfe-4c67-b27d-6697d9bc540d-kube-api-access-f4h5q\") pod \"octavia-operator-controller-manager-68f8cb846c-lwdxt\" (UID: \"1fa9c6ba-6bfe-4c67-b27d-6697d9bc540d\") " pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-lwdxt" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.201325 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gl2f9" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.209100 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5lgp\" (UniqueName: \"kubernetes.io/projected/521acf13-0266-4f13-9744-0f789f922b31-kube-api-access-b5lgp\") pod \"nova-operator-controller-manager-68cb478976-m69zs\" (UID: \"521acf13-0266-4f13-9744-0f789f922b31\") " pod="openstack-operators/nova-operator-controller-manager-68cb478976-m69zs" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.209414 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7wwx\" (UniqueName: \"kubernetes.io/projected/b5ef0cec-fba0-46b1-8410-cb3fd8551106-kube-api-access-x7wwx\") pod \"mariadb-operator-controller-manager-67bf948998-2ktvb\" (UID: \"b5ef0cec-fba0-46b1-8410-cb3fd8551106\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2ktvb" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.209550 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4f2x\" (UniqueName: \"kubernetes.io/projected/fcbc9624-faaa-4663-8392-7684b49a3d93-kube-api-access-x4f2x\") pod \"neutron-operator-controller-manager-7c7cc6ff45-jx2jt\" (UID: \"fcbc9624-faaa-4663-8392-7684b49a3d93\") " pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-jx2jt" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.209749 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7t2m\" (UniqueName: \"kubernetes.io/projected/04f845cd-06d1-47cf-975c-5ad809f0734a-kube-api-access-b7t2m\") pod \"manila-operator-controller-manager-76c896469f-zjnc9\" (UID: \"04f845cd-06d1-47cf-975c-5ad809f0734a\") " pod="openstack-operators/manila-operator-controller-manager-76c896469f-zjnc9" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.222248 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h"] Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.223058 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.228380 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.228499 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-cs82s" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.241854 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-rk4qn"] Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.242982 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rk4qn" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.246856 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-2jxms" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.253128 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-7gtnm"] Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.262498 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7wwx\" (UniqueName: \"kubernetes.io/projected/b5ef0cec-fba0-46b1-8410-cb3fd8551106-kube-api-access-x7wwx\") pod \"mariadb-operator-controller-manager-67bf948998-2ktvb\" (UID: \"b5ef0cec-fba0-46b1-8410-cb3fd8551106\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2ktvb" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.278226 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-nvhfw" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.283730 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h"] Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.283781 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6f7455757b-vx5xt"] Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.284370 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7gtnm" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.292265 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7t2m\" (UniqueName: \"kubernetes.io/projected/04f845cd-06d1-47cf-975c-5ad809f0734a-kube-api-access-b7t2m\") pod \"manila-operator-controller-manager-76c896469f-zjnc9\" (UID: \"04f845cd-06d1-47cf-975c-5ad809f0734a\") " pod="openstack-operators/manila-operator-controller-manager-76c896469f-zjnc9" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.300126 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-6w9fd" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.301396 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-vx5xt" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.321098 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-z2wkd" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.335493 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4f2x\" (UniqueName: \"kubernetes.io/projected/fcbc9624-faaa-4663-8392-7684b49a3d93-kube-api-access-x4f2x\") pod \"neutron-operator-controller-manager-7c7cc6ff45-jx2jt\" (UID: \"fcbc9624-faaa-4663-8392-7684b49a3d93\") " pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-jx2jt" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.335571 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4h5q\" (UniqueName: \"kubernetes.io/projected/1fa9c6ba-6bfe-4c67-b27d-6697d9bc540d-kube-api-access-f4h5q\") pod \"octavia-operator-controller-manager-68f8cb846c-lwdxt\" (UID: \"1fa9c6ba-6bfe-4c67-b27d-6697d9bc540d\") " pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-lwdxt" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.335608 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5lgp\" (UniqueName: \"kubernetes.io/projected/521acf13-0266-4f13-9744-0f789f922b31-kube-api-access-b5lgp\") pod \"nova-operator-controller-manager-68cb478976-m69zs\" (UID: \"521acf13-0266-4f13-9744-0f789f922b31\") " pod="openstack-operators/nova-operator-controller-manager-68cb478976-m69zs" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.351094 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6f7455757b-vx5xt"] Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.351724 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-dd77988f8-vpfmk" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.400624 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4h5q\" (UniqueName: \"kubernetes.io/projected/1fa9c6ba-6bfe-4c67-b27d-6697d9bc540d-kube-api-access-f4h5q\") pod \"octavia-operator-controller-manager-68f8cb846c-lwdxt\" (UID: \"1fa9c6ba-6bfe-4c67-b27d-6697d9bc540d\") " pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-lwdxt" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.416493 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4f2x\" (UniqueName: \"kubernetes.io/projected/fcbc9624-faaa-4663-8392-7684b49a3d93-kube-api-access-x4f2x\") pod \"neutron-operator-controller-manager-7c7cc6ff45-jx2jt\" (UID: \"fcbc9624-faaa-4663-8392-7684b49a3d93\") " pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-jx2jt" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.424227 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7f9d69db65-fpcjf" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.427787 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-rk4qn"] Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.454860 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vr4c\" (UniqueName: \"kubernetes.io/projected/ba4a4cf7-578c-4426-b585-c51a610117dc-kube-api-access-7vr4c\") pod \"ovn-operator-controller-manager-788c46999f-rk4qn\" (UID: \"ba4a4cf7-578c-4426-b585-c51a610117dc\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rk4qn" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.454946 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3484963-7f9a-47b3-b59d-9390033689b6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h\" (UID: \"c3484963-7f9a-47b3-b59d-9390033689b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.454984 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhjcw\" (UniqueName: \"kubernetes.io/projected/209e5c93-4fcb-45d3-8a50-25bfc9f954bd-kube-api-access-lhjcw\") pod \"placement-operator-controller-manager-5b964cf4cd-7gtnm\" (UID: \"209e5c93-4fcb-45d3-8a50-25bfc9f954bd\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7gtnm" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.455003 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttxqp\" (UniqueName: \"kubernetes.io/projected/e50323a3-871d-41a8-b8b7-488d1fd62e6b-kube-api-access-ttxqp\") pod \"swift-operator-controller-manager-6f7455757b-vx5xt\" (UID: \"e50323a3-871d-41a8-b8b7-488d1fd62e6b\") " pod="openstack-operators/swift-operator-controller-manager-6f7455757b-vx5xt" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.455048 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkz5r\" (UniqueName: \"kubernetes.io/projected/c3484963-7f9a-47b3-b59d-9390033689b6-kube-api-access-jkz5r\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h\" (UID: \"c3484963-7f9a-47b3-b59d-9390033689b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.461443 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-7gtnm"] Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.472658 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-76c896469f-zjnc9" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.491857 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-999b7"] Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.492739 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-999b7" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.516239 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2ktvb" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.520169 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-6xsvq"] Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.521312 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6xsvq" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.533814 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-g96rr" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.534371 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-55rpb" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.543110 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5lgp\" (UniqueName: \"kubernetes.io/projected/521acf13-0266-4f13-9744-0f789f922b31-kube-api-access-b5lgp\") pod \"nova-operator-controller-manager-68cb478976-m69zs\" (UID: \"521acf13-0266-4f13-9744-0f789f922b31\") " pod="openstack-operators/nova-operator-controller-manager-68cb478976-m69zs" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.550583 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-jx2jt" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.561411 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3484963-7f9a-47b3-b59d-9390033689b6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h\" (UID: \"c3484963-7f9a-47b3-b59d-9390033689b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.561466 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhjcw\" (UniqueName: \"kubernetes.io/projected/209e5c93-4fcb-45d3-8a50-25bfc9f954bd-kube-api-access-lhjcw\") pod \"placement-operator-controller-manager-5b964cf4cd-7gtnm\" (UID: \"209e5c93-4fcb-45d3-8a50-25bfc9f954bd\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7gtnm" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.561486 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttxqp\" (UniqueName: \"kubernetes.io/projected/e50323a3-871d-41a8-b8b7-488d1fd62e6b-kube-api-access-ttxqp\") pod \"swift-operator-controller-manager-6f7455757b-vx5xt\" (UID: \"e50323a3-871d-41a8-b8b7-488d1fd62e6b\") " pod="openstack-operators/swift-operator-controller-manager-6f7455757b-vx5xt" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.561540 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkz5r\" (UniqueName: \"kubernetes.io/projected/c3484963-7f9a-47b3-b59d-9390033689b6-kube-api-access-jkz5r\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h\" (UID: 
\"c3484963-7f9a-47b3-b59d-9390033689b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.561571 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vr4c\" (UniqueName: \"kubernetes.io/projected/ba4a4cf7-578c-4426-b585-c51a610117dc-kube-api-access-7vr4c\") pod \"ovn-operator-controller-manager-788c46999f-rk4qn\" (UID: \"ba4a4cf7-578c-4426-b585-c51a610117dc\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rk4qn" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.561607 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mbnh\" (UniqueName: \"kubernetes.io/projected/e788f64b-fae0-41d0-931c-a707ff7b2221-kube-api-access-6mbnh\") pod \"test-operator-controller-manager-56f8bfcd9f-6xsvq\" (UID: \"e788f64b-fae0-41d0-931c-a707ff7b2221\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6xsvq" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.561632 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9zck\" (UniqueName: \"kubernetes.io/projected/6c2400dd-889c-46eb-8d6a-d69e1859135d-kube-api-access-s9zck\") pod \"telemetry-operator-controller-manager-6cf8c44c7-999b7\" (UID: \"6c2400dd-889c-46eb-8d6a-d69e1859135d\") " pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-999b7" Jan 29 15:17:39 crc kubenswrapper[4620]: E0129 15:17:39.561790 4620 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:17:39 crc kubenswrapper[4620]: E0129 15:17:39.561840 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3484963-7f9a-47b3-b59d-9390033689b6-cert podName:c3484963-7f9a-47b3-b59d-9390033689b6 nodeName:}" failed. No retries permitted until 2026-01-29 15:17:40.0618249 +0000 UTC m=+1000.674652545 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c3484963-7f9a-47b3-b59d-9390033689b6-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" (UID: "c3484963-7f9a-47b3-b59d-9390033689b6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.588943 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-68cb478976-m69zs" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.593550 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-lwdxt" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.605346 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-f8vlq"] Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.606289 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-f8vlq" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.618436 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-bmpkk" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.619098 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttxqp\" (UniqueName: \"kubernetes.io/projected/e50323a3-871d-41a8-b8b7-488d1fd62e6b-kube-api-access-ttxqp\") pod \"swift-operator-controller-manager-6f7455757b-vx5xt\" (UID: \"e50323a3-871d-41a8-b8b7-488d1fd62e6b\") " pod="openstack-operators/swift-operator-controller-manager-6f7455757b-vx5xt" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.638806 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhjcw\" (UniqueName: \"kubernetes.io/projected/209e5c93-4fcb-45d3-8a50-25bfc9f954bd-kube-api-access-lhjcw\") pod \"placement-operator-controller-manager-5b964cf4cd-7gtnm\" (UID: \"209e5c93-4fcb-45d3-8a50-25bfc9f954bd\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7gtnm" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.642247 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-6xsvq"] Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.642788 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vr4c\" (UniqueName: \"kubernetes.io/projected/ba4a4cf7-578c-4426-b585-c51a610117dc-kube-api-access-7vr4c\") pod \"ovn-operator-controller-manager-788c46999f-rk4qn\" (UID: \"ba4a4cf7-578c-4426-b585-c51a610117dc\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rk4qn" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.643509 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkz5r\" (UniqueName: \"kubernetes.io/projected/c3484963-7f9a-47b3-b59d-9390033689b6-kube-api-access-jkz5r\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h\" (UID: \"c3484963-7f9a-47b3-b59d-9390033689b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.657799 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-f8vlq"] Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.662367 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ea8203c-2846-46ef-be3b-49596b6edc45-cert\") pod \"infra-operator-controller-manager-79955696d6-zg6ls\" (UID: \"6ea8203c-2846-46ef-be3b-49596b6edc45\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zg6ls" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.662421 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mbnh\" (UniqueName: \"kubernetes.io/projected/e788f64b-fae0-41d0-931c-a707ff7b2221-kube-api-access-6mbnh\") pod \"test-operator-controller-manager-56f8bfcd9f-6xsvq\" (UID: \"e788f64b-fae0-41d0-931c-a707ff7b2221\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6xsvq" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.662446 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s9zck\" (UniqueName: \"kubernetes.io/projected/6c2400dd-889c-46eb-8d6a-d69e1859135d-kube-api-access-s9zck\") pod \"telemetry-operator-controller-manager-6cf8c44c7-999b7\" (UID: \"6c2400dd-889c-46eb-8d6a-d69e1859135d\") " pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-999b7" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.662485 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwxrn\" (UniqueName: \"kubernetes.io/projected/e2802504-ea02-427f-ab78-79e02a882726-kube-api-access-qwxrn\") pod \"watcher-operator-controller-manager-59f4c7d7c4-f8vlq\" (UID: \"e2802504-ea02-427f-ab78-79e02a882726\") " pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-f8vlq" Jan 29 15:17:39 crc kubenswrapper[4620]: E0129 15:17:39.662608 4620 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:17:39 crc kubenswrapper[4620]: E0129 15:17:39.662645 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ea8203c-2846-46ef-be3b-49596b6edc45-cert podName:6ea8203c-2846-46ef-be3b-49596b6edc45 nodeName:}" failed. No retries permitted until 2026-01-29 15:17:40.662630019 +0000 UTC m=+1001.275457664 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6ea8203c-2846-46ef-be3b-49596b6edc45-cert") pod "infra-operator-controller-manager-79955696d6-zg6ls" (UID: "6ea8203c-2846-46ef-be3b-49596b6edc45") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.685996 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-999b7"] Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.718921 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7gtnm" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.721503 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-vx5xt" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.741912 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp"] Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.743054 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.762813 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-webhook-certs\") pod \"openstack-operator-controller-manager-65db95fbc9-lzmhp\" (UID: \"e0487d66-d615-4562-b241-9d1424693de8\") " pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.762868 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwxrn\" (UniqueName: \"kubernetes.io/projected/e2802504-ea02-427f-ab78-79e02a882726-kube-api-access-qwxrn\") pod \"watcher-operator-controller-manager-59f4c7d7c4-f8vlq\" (UID: \"e2802504-ea02-427f-ab78-79e02a882726\") " pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-f8vlq" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.762915 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljwqw\" (UniqueName: \"kubernetes.io/projected/e0487d66-d615-4562-b241-9d1424693de8-kube-api-access-ljwqw\") pod \"openstack-operator-controller-manager-65db95fbc9-lzmhp\" (UID: \"e0487d66-d615-4562-b241-9d1424693de8\") " pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.762937 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-metrics-certs\") pod \"openstack-operator-controller-manager-65db95fbc9-lzmhp\" (UID: \"e0487d66-d615-4562-b241-9d1424693de8\") " pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.775243 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp"] Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.776636 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mbnh\" (UniqueName: \"kubernetes.io/projected/e788f64b-fae0-41d0-931c-a707ff7b2221-kube-api-access-6mbnh\") pod \"test-operator-controller-manager-56f8bfcd9f-6xsvq\" (UID: \"e788f64b-fae0-41d0-931c-a707ff7b2221\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6xsvq" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.776846 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.777052 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-z4hd4" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.777057 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.780919 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwxrn\" (UniqueName: \"kubernetes.io/projected/e2802504-ea02-427f-ab78-79e02a882726-kube-api-access-qwxrn\") pod \"watcher-operator-controller-manager-59f4c7d7c4-f8vlq\" (UID: 
\"e2802504-ea02-427f-ab78-79e02a882726\") " pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-f8vlq" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.797223 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46gn6"] Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.799085 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46gn6" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.810879 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jcqtd" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.816833 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46gn6"] Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.864425 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4m88\" (UniqueName: \"kubernetes.io/projected/7cf3c71a-0b56-4367-832f-71713e2684c9-kube-api-access-c4m88\") pod \"rabbitmq-cluster-operator-manager-668c99d594-46gn6\" (UID: \"7cf3c71a-0b56-4367-832f-71713e2684c9\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46gn6" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.864546 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljwqw\" (UniqueName: \"kubernetes.io/projected/e0487d66-d615-4562-b241-9d1424693de8-kube-api-access-ljwqw\") pod \"openstack-operator-controller-manager-65db95fbc9-lzmhp\" (UID: \"e0487d66-d615-4562-b241-9d1424693de8\") " pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.864571 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-metrics-certs\") pod \"openstack-operator-controller-manager-65db95fbc9-lzmhp\" (UID: \"e0487d66-d615-4562-b241-9d1424693de8\") " pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.864671 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-webhook-certs\") pod \"openstack-operator-controller-manager-65db95fbc9-lzmhp\" (UID: \"e0487d66-d615-4562-b241-9d1424693de8\") " pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:17:39 crc kubenswrapper[4620]: E0129 15:17:39.864839 4620 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:17:39 crc kubenswrapper[4620]: E0129 15:17:39.864987 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-webhook-certs podName:e0487d66-d615-4562-b241-9d1424693de8 nodeName:}" failed. No retries permitted until 2026-01-29 15:17:40.364921639 +0000 UTC m=+1000.977749304 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-webhook-certs") pod "openstack-operator-controller-manager-65db95fbc9-lzmhp" (UID: "e0487d66-d615-4562-b241-9d1424693de8") : secret "webhook-server-cert" not found Jan 29 15:17:39 crc kubenswrapper[4620]: E0129 15:17:39.865255 4620 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:17:39 crc kubenswrapper[4620]: E0129 15:17:39.865434 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-metrics-certs podName:e0487d66-d615-4562-b241-9d1424693de8 nodeName:}" failed. No retries permitted until 2026-01-29 15:17:40.365408474 +0000 UTC m=+1000.978236179 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-metrics-certs") pod "openstack-operator-controller-manager-65db95fbc9-lzmhp" (UID: "e0487d66-d615-4562-b241-9d1424693de8") : secret "metrics-server-cert" not found Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.883149 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljwqw\" (UniqueName: \"kubernetes.io/projected/e0487d66-d615-4562-b241-9d1424693de8-kube-api-access-ljwqw\") pod \"openstack-operator-controller-manager-65db95fbc9-lzmhp\" (UID: \"e0487d66-d615-4562-b241-9d1424693de8\") " pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.917886 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6xsvq" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.926243 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rk4qn" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.943188 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-f8vlq" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.965727 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4m88\" (UniqueName: \"kubernetes.io/projected/7cf3c71a-0b56-4367-832f-71713e2684c9-kube-api-access-c4m88\") pod \"rabbitmq-cluster-operator-manager-668c99d594-46gn6\" (UID: \"7cf3c71a-0b56-4367-832f-71713e2684c9\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46gn6" Jan 29 15:17:39 crc kubenswrapper[4620]: I0129 15:17:39.982010 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4m88\" (UniqueName: \"kubernetes.io/projected/7cf3c71a-0b56-4367-832f-71713e2684c9-kube-api-access-c4m88\") pod \"rabbitmq-cluster-operator-manager-668c99d594-46gn6\" (UID: \"7cf3c71a-0b56-4367-832f-71713e2684c9\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46gn6" Jan 29 15:17:40 crc kubenswrapper[4620]: I0129 15:17:40.012934 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d6fdb96dc-5r2sp"] Jan 29 15:17:40 crc kubenswrapper[4620]: I0129 15:17:40.066534 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3484963-7f9a-47b3-b59d-9390033689b6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h\" (UID: \"c3484963-7f9a-47b3-b59d-9390033689b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" Jan 29 15:17:40 crc kubenswrapper[4620]: E0129 15:17:40.067639 4620 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:17:40 crc kubenswrapper[4620]: E0129 15:17:40.067742 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3484963-7f9a-47b3-b59d-9390033689b6-cert podName:c3484963-7f9a-47b3-b59d-9390033689b6 nodeName:}" failed. No retries permitted until 2026-01-29 15:17:41.067722134 +0000 UTC m=+1001.680549779 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c3484963-7f9a-47b3-b59d-9390033689b6-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" (UID: "c3484963-7f9a-47b3-b59d-9390033689b6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:17:40 crc kubenswrapper[4620]: I0129 15:17:40.121108 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46gn6" Jan 29 15:17:40 crc kubenswrapper[4620]: I0129 15:17:40.334879 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9zck\" (UniqueName: \"kubernetes.io/projected/6c2400dd-889c-46eb-8d6a-d69e1859135d-kube-api-access-s9zck\") pod \"telemetry-operator-controller-manager-6cf8c44c7-999b7\" (UID: \"6c2400dd-889c-46eb-8d6a-d69e1859135d\") " pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-999b7" Jan 29 15:17:40 crc kubenswrapper[4620]: I0129 15:17:40.380381 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-webhook-certs\") pod \"openstack-operator-controller-manager-65db95fbc9-lzmhp\" (UID: \"e0487d66-d615-4562-b241-9d1424693de8\") " pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:17:40 crc kubenswrapper[4620]: I0129 15:17:40.380474 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-metrics-certs\") pod \"openstack-operator-controller-manager-65db95fbc9-lzmhp\" (UID: \"e0487d66-d615-4562-b241-9d1424693de8\") " pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:17:40 crc kubenswrapper[4620]: E0129 15:17:40.380632 4620 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:17:40 crc kubenswrapper[4620]: E0129 15:17:40.380679 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-metrics-certs podName:e0487d66-d615-4562-b241-9d1424693de8 nodeName:}" failed. No retries permitted until 2026-01-29 15:17:41.380664215 +0000 UTC m=+1001.993491860 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-metrics-certs") pod "openstack-operator-controller-manager-65db95fbc9-lzmhp" (UID: "e0487d66-d615-4562-b241-9d1424693de8") : secret "metrics-server-cert" not found Jan 29 15:17:40 crc kubenswrapper[4620]: E0129 15:17:40.380916 4620 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:17:40 crc kubenswrapper[4620]: E0129 15:17:40.381000 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-webhook-certs podName:e0487d66-d615-4562-b241-9d1424693de8 nodeName:}" failed. No retries permitted until 2026-01-29 15:17:41.380979255 +0000 UTC m=+1001.993806980 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-webhook-certs") pod "openstack-operator-controller-manager-65db95fbc9-lzmhp" (UID: "e0487d66-d615-4562-b241-9d1424693de8") : secret "webhook-server-cert" not found Jan 29 15:17:40 crc kubenswrapper[4620]: I0129 15:17:40.644103 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-999b7" Jan 29 15:17:40 crc kubenswrapper[4620]: I0129 15:17:40.694943 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ea8203c-2846-46ef-be3b-49596b6edc45-cert\") pod \"infra-operator-controller-manager-79955696d6-zg6ls\" (UID: \"6ea8203c-2846-46ef-be3b-49596b6edc45\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zg6ls" Jan 29 15:17:40 crc kubenswrapper[4620]: E0129 15:17:40.695112 4620 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:17:40 crc kubenswrapper[4620]: E0129 15:17:40.695166 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ea8203c-2846-46ef-be3b-49596b6edc45-cert podName:6ea8203c-2846-46ef-be3b-49596b6edc45 nodeName:}" failed. No retries permitted until 2026-01-29 15:17:42.695148833 +0000 UTC m=+1003.307976478 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6ea8203c-2846-46ef-be3b-49596b6edc45-cert") pod "infra-operator-controller-manager-79955696d6-zg6ls" (UID: "6ea8203c-2846-46ef-be3b-49596b6edc45") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:17:40 crc kubenswrapper[4620]: I0129 15:17:40.936200 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-858d89fd-mjknx"] Jan 29 15:17:41 crc kubenswrapper[4620]: I0129 15:17:41.075171 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-866c9d5b98-nvhfw"] Jan 29 15:17:41 crc kubenswrapper[4620]: I0129 15:17:41.107436 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-d8b84fbc-mhkm4"] Jan 29 15:17:41 crc kubenswrapper[4620]: I0129 15:17:41.147901 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3484963-7f9a-47b3-b59d-9390033689b6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h\" (UID: \"c3484963-7f9a-47b3-b59d-9390033689b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" Jan 29 15:17:41 crc kubenswrapper[4620]: E0129 15:17:41.148073 4620 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:17:41 crc kubenswrapper[4620]: E0129 15:17:41.148123 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3484963-7f9a-47b3-b59d-9390033689b6-cert podName:c3484963-7f9a-47b3-b59d-9390033689b6 nodeName:}" failed. No retries permitted until 2026-01-29 15:17:43.148108364 +0000 UTC m=+1003.760936009 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c3484963-7f9a-47b3-b59d-9390033689b6-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" (UID: "c3484963-7f9a-47b3-b59d-9390033689b6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:17:41 crc kubenswrapper[4620]: I0129 15:17:41.189083 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d6fdb96dc-5r2sp" event={"ID":"eba5d75b-67b6-45b6-99a6-508fbbcd6fbc","Type":"ContainerStarted","Data":"5356efc51911f029a9e0713827a4e19a2c03b790dd9fac469f6938be2af6a4f1"} Jan 29 15:17:41 crc kubenswrapper[4620]: I0129 15:17:41.192586 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-858d89fd-mjknx" event={"ID":"cf77b77d-178f-45cc-854f-5ae0438eac47","Type":"ContainerStarted","Data":"a02d75c18b678686bcb7bda3bb650ff9e44d1bc3ee395a0ab7fe239bfed9a767"} Jan 29 15:17:41 crc kubenswrapper[4620]: I0129 15:17:41.211162 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-nvhfw" event={"ID":"eb53cd1f-b0ca-41b1-a611-b71cc5ba71d5","Type":"ContainerStarted","Data":"88f7408f9198b73cda2ebb819cfc79a8e598fab42d88109f9b69f0e1e6e22869"} Jan 29 15:17:41 crc kubenswrapper[4620]: I0129 15:17:41.420804 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-gl2f9"] Jan 29 15:17:41 crc kubenswrapper[4620]: W0129 15:17:41.429686 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18e5dfa7_ffa3_4a45_b4bd_9b3e8f5e0f21.slice/crio-ad2f2843c6074f397b506dc7b4efa187d626963a3fb7501ae0f4617b1f4e2975 WatchSource:0}: Error finding container ad2f2843c6074f397b506dc7b4efa187d626963a3fb7501ae0f4617b1f4e2975: Status 404 returned error can't find the container with id ad2f2843c6074f397b506dc7b4efa187d626963a3fb7501ae0f4617b1f4e2975 Jan 29 15:17:41 crc kubenswrapper[4620]: I0129 15:17:41.451346 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-metrics-certs\") pod \"openstack-operator-controller-manager-65db95fbc9-lzmhp\" (UID: \"e0487d66-d615-4562-b241-9d1424693de8\") " pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:17:41 crc kubenswrapper[4620]: I0129 15:17:41.451471 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-webhook-certs\") pod \"openstack-operator-controller-manager-65db95fbc9-lzmhp\" (UID: \"e0487d66-d615-4562-b241-9d1424693de8\") " pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:17:41 crc kubenswrapper[4620]: E0129 15:17:41.452291 4620 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:17:41 crc kubenswrapper[4620]: E0129 15:17:41.452376 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-metrics-certs podName:e0487d66-d615-4562-b241-9d1424693de8 nodeName:}" failed. No retries permitted until 2026-01-29 15:17:43.452352269 +0000 UTC m=+1004.065179914 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-metrics-certs") pod "openstack-operator-controller-manager-65db95fbc9-lzmhp" (UID: "e0487d66-d615-4562-b241-9d1424693de8") : secret "metrics-server-cert" not found Jan 29 15:17:41 crc kubenswrapper[4620]: E0129 15:17:41.452440 4620 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:17:41 crc kubenswrapper[4620]: E0129 15:17:41.452491 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-webhook-certs podName:e0487d66-d615-4562-b241-9d1424693de8 nodeName:}" failed. No retries permitted until 2026-01-29 15:17:43.452478163 +0000 UTC m=+1004.065305808 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-webhook-certs") pod "openstack-operator-controller-manager-65db95fbc9-lzmhp" (UID: "e0487d66-d615-4562-b241-9d1424693de8") : secret "webhook-server-cert" not found Jan 29 15:17:41 crc kubenswrapper[4620]: I0129 15:17:41.491044 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-f8c4db9df-wdbkp"] Jan 29 15:17:41 crc kubenswrapper[4620]: W0129 15:17:41.493530 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0839a549_c728_40e1_bf59_d8eb6cefc3f2.slice/crio-f73f190bcb3aa65dcc43f48538b9ac5844c4469ac5c71bb0bb8c420105809f87 WatchSource:0}: Error finding container f73f190bcb3aa65dcc43f48538b9ac5844c4469ac5c71bb0bb8c420105809f87: Status 404 returned error can't find the container with id f73f190bcb3aa65dcc43f48538b9ac5844c4469ac5c71bb0bb8c420105809f87 Jan 29 15:17:41 crc kubenswrapper[4620]: I0129 15:17:41.511925 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-dd77988f8-vpfmk"] Jan 29 15:17:41 crc kubenswrapper[4620]: I0129 15:17:41.766896 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-68cb478976-m69zs"] Jan 29 15:17:41 crc kubenswrapper[4620]: I0129 15:17:41.770382 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7f9d69db65-fpcjf"] Jan 29 15:17:41 crc kubenswrapper[4620]: I0129 15:17:41.778831 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-2ktvb"] Jan 29 15:17:41 crc kubenswrapper[4620]: I0129 15:17:41.836017 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-76c896469f-zjnc9"] Jan 29 15:17:41 crc kubenswrapper[4620]: I0129 15:17:41.920701 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-f8vlq"] Jan 29 15:17:41 crc kubenswrapper[4620]: I0129 15:17:41.927848 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-rk4qn"] Jan 29 15:17:41 crc kubenswrapper[4620]: I0129 15:17:41.935624 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-7gtnm"] Jan 29 15:17:42 crc kubenswrapper[4620]: W0129 15:17:42.039041 4620 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2802504_ea02_427f_ab78_79e02a882726.slice/crio-7037e4eed5ff63ff7a9792e39be2da3eab84303d12aebaff32bf4d6424edf82f WatchSource:0}: Error finding container 7037e4eed5ff63ff7a9792e39be2da3eab84303d12aebaff32bf4d6424edf82f: Status 404 returned error can't find the container with id 7037e4eed5ff63ff7a9792e39be2da3eab84303d12aebaff32bf4d6424edf82f Jan 29 15:17:42 crc kubenswrapper[4620]: I0129 15:17:42.099695 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-jx2jt"] Jan 29 15:17:42 crc kubenswrapper[4620]: I0129 15:17:42.128809 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-999b7"] Jan 29 15:17:42 crc kubenswrapper[4620]: I0129 15:17:42.134845 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-6xsvq"] Jan 29 15:17:42 crc kubenswrapper[4620]: I0129 15:17:42.137928 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68f8cb846c-lwdxt"] Jan 29 15:17:42 crc kubenswrapper[4620]: W0129 15:17:42.160009 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode788f64b_fae0_41d0_931c_a707ff7b2221.slice/crio-1702269d0abb31fb80d2676536afc14de52ff206d992bf32e0afecfec01b2f44 WatchSource:0}: Error finding container 1702269d0abb31fb80d2676536afc14de52ff206d992bf32e0afecfec01b2f44: Status 404 returned error can't find the container with id 1702269d0abb31fb80d2676536afc14de52ff206d992bf32e0afecfec01b2f44 Jan 29 15:17:42 crc kubenswrapper[4620]: E0129 15:17:42.182194 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:61e700ea66730db00f31cb2a89fcd49bb919f246027c414e509166c1cab8429c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f4h5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-68f8cb846c-lwdxt_openstack-operators(1fa9c6ba-6bfe-4c67-b27d-6697d9bc540d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:17:42 crc kubenswrapper[4620]: E0129 15:17:42.182506 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6mbnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-6xsvq_openstack-operators(e788f64b-fae0-41d0-931c-a707ff7b2221): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:17:42 crc 
kubenswrapper[4620]: E0129 15:17:42.183969 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6xsvq" podUID="e788f64b-fae0-41d0-931c-a707ff7b2221" Jan 29 15:17:42 crc kubenswrapper[4620]: E0129 15:17:42.189851 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-lwdxt" podUID="1fa9c6ba-6bfe-4c67-b27d-6697d9bc540d" Jan 29 15:17:42 crc kubenswrapper[4620]: I0129 15:17:42.213293 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46gn6"] Jan 29 15:17:42 crc kubenswrapper[4620]: I0129 15:17:42.242133 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gl2f9" event={"ID":"18e5dfa7-ffa3-4a45-b4bd-9b3e8f5e0f21","Type":"ContainerStarted","Data":"ad2f2843c6074f397b506dc7b4efa187d626963a3fb7501ae0f4617b1f4e2975"} Jan 29 15:17:42 crc kubenswrapper[4620]: I0129 15:17:42.255418 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6xsvq" event={"ID":"e788f64b-fae0-41d0-931c-a707ff7b2221","Type":"ContainerStarted","Data":"1702269d0abb31fb80d2676536afc14de52ff206d992bf32e0afecfec01b2f44"} Jan 29 15:17:42 crc kubenswrapper[4620]: E0129 15:17:42.256485 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6xsvq" podUID="e788f64b-fae0-41d0-931c-a707ff7b2221" Jan 29 15:17:42 crc kubenswrapper[4620]: I0129 15:17:42.258716 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2ktvb" event={"ID":"b5ef0cec-fba0-46b1-8410-cb3fd8551106","Type":"ContainerStarted","Data":"46a6de919b95acdee1bfde5d748c911aece41d38729817ac6a3ee4dd28d76ffa"} Jan 29 15:17:42 crc kubenswrapper[4620]: I0129 15:17:42.259783 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-jx2jt" event={"ID":"fcbc9624-faaa-4663-8392-7684b49a3d93","Type":"ContainerStarted","Data":"2bb81ce45e8fe8744ab53ab97018c63f0809c44171f899b2e184fe7c02d9b656"} Jan 29 15:17:42 crc kubenswrapper[4620]: I0129 15:17:42.262568 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-wdbkp" event={"ID":"0839a549-c728-40e1-bf59-d8eb6cefc3f2","Type":"ContainerStarted","Data":"f73f190bcb3aa65dcc43f48538b9ac5844c4469ac5c71bb0bb8c420105809f87"} Jan 29 15:17:42 crc kubenswrapper[4620]: I0129 15:17:42.264981 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-999b7" event={"ID":"6c2400dd-889c-46eb-8d6a-d69e1859135d","Type":"ContainerStarted","Data":"b15a9d2a97f2024e16bcb96e31b2d245bedce977e6dc590bff8891a6a810fe0d"} Jan 29 15:17:42 crc kubenswrapper[4620]: I0129 15:17:42.266093 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-mhkm4" event={"ID":"c0a5289e-e06f-456b-9dd9-ab08f6f3e2f7","Type":"ContainerStarted","Data":"bce9bdc90caca6416b7817cd8b7f439302d45ce2ae828f1f6cc116b45fe38d5d"} Jan 29 15:17:42 crc kubenswrapper[4620]: I0129 15:17:42.267400 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-f8vlq" event={"ID":"e2802504-ea02-427f-ab78-79e02a882726","Type":"ContainerStarted","Data":"7037e4eed5ff63ff7a9792e39be2da3eab84303d12aebaff32bf4d6424edf82f"} Jan 29 15:17:42 crc kubenswrapper[4620]: I0129 15:17:42.269484 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7gtnm" event={"ID":"209e5c93-4fcb-45d3-8a50-25bfc9f954bd","Type":"ContainerStarted","Data":"5a78ff9f2e3c121401a0a96c66f0726b1fda297357ad8db3639e7df56d511ec1"} Jan 29 15:17:42 crc kubenswrapper[4620]: I0129 15:17:42.272605 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-dd77988f8-vpfmk" event={"ID":"66c3f803-5b87-4ac9-9673-55cfa299abda","Type":"ContainerStarted","Data":"e6adc88f61c7eccfacc303e3d71001eaa2b081e34b2925c1c0f6c63a3a9670b8"} Jan 29 15:17:42 crc kubenswrapper[4620]: I0129 15:17:42.274396 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-68cb478976-m69zs" event={"ID":"521acf13-0266-4f13-9744-0f789f922b31","Type":"ContainerStarted","Data":"f89d646255a7ea5af87e442f9f91c0aad82ff519a99153f8ca6ab24b9a9a9978"} Jan 29 15:17:42 crc kubenswrapper[4620]: I0129 15:17:42.277702 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7f9d69db65-fpcjf" event={"ID":"379029bd-b764-49a9-b9d0-cdf5d69e2276","Type":"ContainerStarted","Data":"48e85eec17e8566e7a135aef5f14e55ec9949d6c36f6fb002b3fe9121deea434"} Jan 29 15:17:42 crc kubenswrapper[4620]: I0129 15:17:42.280913 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-76c896469f-zjnc9" event={"ID":"04f845cd-06d1-47cf-975c-5ad809f0734a","Type":"ContainerStarted","Data":"b9c2cd7365e53ccd49585c13a361100d12be5854def9b51cc4eb09327b0d2813"} Jan 29 15:17:42 crc kubenswrapper[4620]: I0129 15:17:42.288942 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-lwdxt" event={"ID":"1fa9c6ba-6bfe-4c67-b27d-6697d9bc540d","Type":"ContainerStarted","Data":"b527dc76570e4f443081e1101075c4c970ab995390d41b110f9ee090bb9aa477"} Jan 29 15:17:42 crc kubenswrapper[4620]: I0129 15:17:42.312754 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rk4qn" event={"ID":"ba4a4cf7-578c-4426-b585-c51a610117dc","Type":"ContainerStarted","Data":"4bb70c07c741a3bf3848c54199e95ecd0335ae08221d6a1f44bfadca340b0389"} Jan 29 15:17:42 crc kubenswrapper[4620]: E0129 15:17:42.317154 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:61e700ea66730db00f31cb2a89fcd49bb919f246027c414e509166c1cab8429c\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-lwdxt" podUID="1fa9c6ba-6bfe-4c67-b27d-6697d9bc540d" Jan 29 15:17:42 crc kubenswrapper[4620]: I0129 15:17:42.357310 4620 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6f7455757b-vx5xt"] Jan 29 15:17:42 crc kubenswrapper[4620]: E0129 15:17:42.376600 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/swift-operator@sha256:4dfb3cd42806f7989d962e2346a58c6358e70cf95c41b4890e26cb5219805ac8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ttxqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6f7455757b-vx5xt_openstack-operators(e50323a3-871d-41a8-b8b7-488d1fd62e6b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:17:42 crc kubenswrapper[4620]: E0129 15:17:42.379621 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-vx5xt" podUID="e50323a3-871d-41a8-b8b7-488d1fd62e6b" Jan 29 15:17:42 crc kubenswrapper[4620]: I0129 15:17:42.720615 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ea8203c-2846-46ef-be3b-49596b6edc45-cert\") pod \"infra-operator-controller-manager-79955696d6-zg6ls\" (UID: \"6ea8203c-2846-46ef-be3b-49596b6edc45\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zg6ls" Jan 29 15:17:42 crc kubenswrapper[4620]: E0129 15:17:42.720786 4620 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:17:42 crc kubenswrapper[4620]: E0129 15:17:42.720841 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ea8203c-2846-46ef-be3b-49596b6edc45-cert podName:6ea8203c-2846-46ef-be3b-49596b6edc45 nodeName:}" failed. No retries permitted until 2026-01-29 15:17:46.720824779 +0000 UTC m=+1007.333652434 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6ea8203c-2846-46ef-be3b-49596b6edc45-cert") pod "infra-operator-controller-manager-79955696d6-zg6ls" (UID: "6ea8203c-2846-46ef-be3b-49596b6edc45") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:17:43 crc kubenswrapper[4620]: I0129 15:17:43.228171 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3484963-7f9a-47b3-b59d-9390033689b6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h\" (UID: \"c3484963-7f9a-47b3-b59d-9390033689b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" Jan 29 15:17:43 crc kubenswrapper[4620]: E0129 15:17:43.228460 4620 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:17:43 crc kubenswrapper[4620]: E0129 15:17:43.228598 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3484963-7f9a-47b3-b59d-9390033689b6-cert podName:c3484963-7f9a-47b3-b59d-9390033689b6 nodeName:}" failed. No retries permitted until 2026-01-29 15:17:47.228584011 +0000 UTC m=+1007.841411656 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c3484963-7f9a-47b3-b59d-9390033689b6-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" (UID: "c3484963-7f9a-47b3-b59d-9390033689b6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:17:43 crc kubenswrapper[4620]: I0129 15:17:43.322338 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-vx5xt" event={"ID":"e50323a3-871d-41a8-b8b7-488d1fd62e6b","Type":"ContainerStarted","Data":"c3c590224afa285dde0a0e24180050e1122a1e66226efbb96e0c525597de4ac9"} Jan 29 15:17:43 crc kubenswrapper[4620]: E0129 15:17:43.324429 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:4dfb3cd42806f7989d962e2346a58c6358e70cf95c41b4890e26cb5219805ac8\\\"\"" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-vx5xt" podUID="e50323a3-871d-41a8-b8b7-488d1fd62e6b" Jan 29 15:17:43 crc kubenswrapper[4620]: I0129 15:17:43.325087 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46gn6" event={"ID":"7cf3c71a-0b56-4367-832f-71713e2684c9","Type":"ContainerStarted","Data":"1b325b910812629e4226898cdf1b77b53cdc97bd6d98e03ae62f45c01fbd6ff6"} Jan 29 15:17:43 crc kubenswrapper[4620]: E0129 15:17:43.329137 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6xsvq" podUID="e788f64b-fae0-41d0-931c-a707ff7b2221" Jan 29 15:17:43 crc kubenswrapper[4620]: E0129 15:17:43.329151 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:61e700ea66730db00f31cb2a89fcd49bb919f246027c414e509166c1cab8429c\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-lwdxt" podUID="1fa9c6ba-6bfe-4c67-b27d-6697d9bc540d" Jan 29 15:17:43 crc kubenswrapper[4620]: I0129 15:17:43.533485 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-metrics-certs\") pod \"openstack-operator-controller-manager-65db95fbc9-lzmhp\" (UID: \"e0487d66-d615-4562-b241-9d1424693de8\") " pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:17:43 crc kubenswrapper[4620]: I0129 15:17:43.533606 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-webhook-certs\") pod \"openstack-operator-controller-manager-65db95fbc9-lzmhp\" (UID: \"e0487d66-d615-4562-b241-9d1424693de8\") " pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:17:43 crc kubenswrapper[4620]: E0129 15:17:43.533630 4620 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:17:43 crc kubenswrapper[4620]: E0129 15:17:43.533692 4620 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-metrics-certs podName:e0487d66-d615-4562-b241-9d1424693de8 nodeName:}" failed. No retries permitted until 2026-01-29 15:17:47.533676623 +0000 UTC m=+1008.146504268 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-metrics-certs") pod "openstack-operator-controller-manager-65db95fbc9-lzmhp" (UID: "e0487d66-d615-4562-b241-9d1424693de8") : secret "metrics-server-cert" not found Jan 29 15:17:43 crc kubenswrapper[4620]: E0129 15:17:43.533809 4620 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:17:43 crc kubenswrapper[4620]: E0129 15:17:43.533891 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-webhook-certs podName:e0487d66-d615-4562-b241-9d1424693de8 nodeName:}" failed. No retries permitted until 2026-01-29 15:17:47.533869169 +0000 UTC m=+1008.146696905 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-webhook-certs") pod "openstack-operator-controller-manager-65db95fbc9-lzmhp" (UID: "e0487d66-d615-4562-b241-9d1424693de8") : secret "webhook-server-cert" not found Jan 29 15:17:43 crc kubenswrapper[4620]: E0129 15:17:43.874477 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:17:43 crc kubenswrapper[4620]: E0129 15:17:43.996075 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862" Jan 29 15:17:43 crc kubenswrapper[4620]: E0129 15:17:43.996242 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:frr-k8s-webhook-server,Image:registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862,Command:[/frr-k8s],Args:[--log-level=debug --webhook-mode=onlywebhook --disable-cert-rotation=true --namespace=$(NAMESPACE) 
--metrics-bind-address=:7572],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:monitoring,HostPort:0,ContainerPort:7572,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pjjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:{1 0 monitoring},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:{1 0 monitoring},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000700000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod frr-k8s-webhook-server-7df86c4f6c-klnxt_metallb-system(fd14327c-f27e-4e0b-9955-a9cc76e1c253): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:17:43 crc kubenswrapper[4620]: E0129 15:17:43.998200 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:17:44 crc kubenswrapper[4620]: E0129 15:17:44.337679 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:4dfb3cd42806f7989d962e2346a58c6358e70cf95c41b4890e26cb5219805ac8\\\"\"" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-vx5xt" podUID="e50323a3-871d-41a8-b8b7-488d1fd62e6b" Jan 29 15:17:46 crc kubenswrapper[4620]: E0129 15:17:46.262728 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:17:46 crc kubenswrapper[4620]: I0129 15:17:46.787696 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ea8203c-2846-46ef-be3b-49596b6edc45-cert\") pod \"infra-operator-controller-manager-79955696d6-zg6ls\" (UID: \"6ea8203c-2846-46ef-be3b-49596b6edc45\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zg6ls" Jan 29 15:17:46 crc kubenswrapper[4620]: E0129 15:17:46.793766 4620 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:17:46 crc kubenswrapper[4620]: E0129 15:17:46.793848 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ea8203c-2846-46ef-be3b-49596b6edc45-cert podName:6ea8203c-2846-46ef-be3b-49596b6edc45 nodeName:}" failed. No retries permitted until 2026-01-29 15:17:54.793830943 +0000 UTC m=+1015.406658588 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6ea8203c-2846-46ef-be3b-49596b6edc45-cert") pod "infra-operator-controller-manager-79955696d6-zg6ls" (UID: "6ea8203c-2846-46ef-be3b-49596b6edc45") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:17:47 crc kubenswrapper[4620]: I0129 15:17:47.295411 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3484963-7f9a-47b3-b59d-9390033689b6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h\" (UID: \"c3484963-7f9a-47b3-b59d-9390033689b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" Jan 29 15:17:47 crc kubenswrapper[4620]: E0129 15:17:47.295580 4620 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:17:47 crc kubenswrapper[4620]: E0129 15:17:47.295668 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3484963-7f9a-47b3-b59d-9390033689b6-cert podName:c3484963-7f9a-47b3-b59d-9390033689b6 nodeName:}" failed. No retries permitted until 2026-01-29 15:17:55.295645348 +0000 UTC m=+1015.908473053 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c3484963-7f9a-47b3-b59d-9390033689b6-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" (UID: "c3484963-7f9a-47b3-b59d-9390033689b6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:17:47 crc kubenswrapper[4620]: I0129 15:17:47.599203 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-metrics-certs\") pod \"openstack-operator-controller-manager-65db95fbc9-lzmhp\" (UID: \"e0487d66-d615-4562-b241-9d1424693de8\") " pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:17:47 crc kubenswrapper[4620]: I0129 15:17:47.599296 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-webhook-certs\") pod \"openstack-operator-controller-manager-65db95fbc9-lzmhp\" (UID: \"e0487d66-d615-4562-b241-9d1424693de8\") " pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:17:47 crc kubenswrapper[4620]: E0129 15:17:47.599436 4620 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:17:47 crc kubenswrapper[4620]: E0129 15:17:47.599481 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-webhook-certs podName:e0487d66-d615-4562-b241-9d1424693de8 nodeName:}" failed. No retries permitted until 2026-01-29 15:17:55.59946826 +0000 UTC m=+1016.212295905 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-webhook-certs") pod "openstack-operator-controller-manager-65db95fbc9-lzmhp" (UID: "e0487d66-d615-4562-b241-9d1424693de8") : secret "webhook-server-cert" not found Jan 29 15:17:47 crc kubenswrapper[4620]: E0129 15:17:47.600833 4620 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:17:47 crc kubenswrapper[4620]: E0129 15:17:47.600869 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-metrics-certs podName:e0487d66-d615-4562-b241-9d1424693de8 nodeName:}" failed. No retries permitted until 2026-01-29 15:17:55.600860864 +0000 UTC m=+1016.213688509 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-metrics-certs") pod "openstack-operator-controller-manager-65db95fbc9-lzmhp" (UID: "e0487d66-d615-4562-b241-9d1424693de8") : secret "metrics-server-cert" not found Jan 29 15:17:48 crc kubenswrapper[4620]: E0129 15:17:48.476954 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" Jan 29 15:17:53 crc kubenswrapper[4620]: E0129 15:17:53.838373 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:17:53 crc kubenswrapper[4620]: E0129 15:17:53.839689 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvbcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-sg6jb_openshift-marketplace(83c6763d-3a90-4ded-b50c-57eb36ad1c0d): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:17:53 crc kubenswrapper[4620]: E0129 15:17:53.840923 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:17:54 crc kubenswrapper[4620]: I0129 
15:17:54.807252 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ea8203c-2846-46ef-be3b-49596b6edc45-cert\") pod \"infra-operator-controller-manager-79955696d6-zg6ls\" (UID: \"6ea8203c-2846-46ef-be3b-49596b6edc45\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zg6ls" Jan 29 15:17:54 crc kubenswrapper[4620]: E0129 15:17:54.807497 4620 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:17:54 crc kubenswrapper[4620]: E0129 15:17:54.807552 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ea8203c-2846-46ef-be3b-49596b6edc45-cert podName:6ea8203c-2846-46ef-be3b-49596b6edc45 nodeName:}" failed. No retries permitted until 2026-01-29 15:18:10.807535325 +0000 UTC m=+1031.420362970 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6ea8203c-2846-46ef-be3b-49596b6edc45-cert") pod "infra-operator-controller-manager-79955696d6-zg6ls" (UID: "6ea8203c-2846-46ef-be3b-49596b6edc45") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:17:55 crc kubenswrapper[4620]: I0129 15:17:55.315869 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3484963-7f9a-47b3-b59d-9390033689b6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h\" (UID: \"c3484963-7f9a-47b3-b59d-9390033689b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" Jan 29 15:17:55 crc kubenswrapper[4620]: E0129 15:17:55.316019 4620 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:17:55 crc kubenswrapper[4620]: E0129 15:17:55.316075 4620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3484963-7f9a-47b3-b59d-9390033689b6-cert podName:c3484963-7f9a-47b3-b59d-9390033689b6 nodeName:}" failed. No retries permitted until 2026-01-29 15:18:11.316059593 +0000 UTC m=+1031.928887238 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c3484963-7f9a-47b3-b59d-9390033689b6-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" (UID: "c3484963-7f9a-47b3-b59d-9390033689b6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:17:55 crc kubenswrapper[4620]: I0129 15:17:55.621018 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-metrics-certs\") pod \"openstack-operator-controller-manager-65db95fbc9-lzmhp\" (UID: \"e0487d66-d615-4562-b241-9d1424693de8\") " pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:17:55 crc kubenswrapper[4620]: I0129 15:17:55.621135 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-webhook-certs\") pod \"openstack-operator-controller-manager-65db95fbc9-lzmhp\" (UID: \"e0487d66-d615-4562-b241-9d1424693de8\") " pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:17:55 crc kubenswrapper[4620]: I0129 15:17:55.627587 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-metrics-certs\") pod \"openstack-operator-controller-manager-65db95fbc9-lzmhp\" (UID: \"e0487d66-d615-4562-b241-9d1424693de8\") " pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:17:55 crc kubenswrapper[4620]: I0129 15:17:55.628323 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e0487d66-d615-4562-b241-9d1424693de8-webhook-certs\") pod \"openstack-operator-controller-manager-65db95fbc9-lzmhp\" (UID: \"e0487d66-d615-4562-b241-9d1424693de8\") " pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:17:55 crc kubenswrapper[4620]: I0129 15:17:55.692793 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:17:57 crc kubenswrapper[4620]: E0129 15:17:57.873620 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:17:57 crc kubenswrapper[4620]: E0129 15:17:57.874012 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:17:59 crc kubenswrapper[4620]: E0129 15:17:59.968018 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" Jan 29 15:17:59 crc kubenswrapper[4620]: E0129 15:17:59.968359 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:18:04 crc kubenswrapper[4620]: E0129 15:18:04.760153 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/cinder-operator@sha256:9a564938039ddc2270feaa565a444c70c1d0d55906006ea88830f48cd4ed862b" Jan 29 15:18:04 crc kubenswrapper[4620]: E0129 15:18:04.760793 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/cinder-operator@sha256:9a564938039ddc2270feaa565a444c70c1d0d55906006ea88830f48cd4ed862b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-58pfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-858d89fd-mjknx_openstack-operators(cf77b77d-178f-45cc-854f-5ae0438eac47): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:18:04 crc kubenswrapper[4620]: E0129 15:18:04.762215 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-858d89fd-mjknx" podUID="cf77b77d-178f-45cc-854f-5ae0438eac47" Jan 29 15:18:05 crc kubenswrapper[4620]: E0129 15:18:05.487854 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/cinder-operator@sha256:9a564938039ddc2270feaa565a444c70c1d0d55906006ea88830f48cd4ed862b\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-858d89fd-mjknx" podUID="cf77b77d-178f-45cc-854f-5ae0438eac47" Jan 29 15:18:06 crc kubenswrapper[4620]: E0129 15:18:06.310524 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/ironic-operator@sha256:d5166d67cfb571a8b84635a479d0fada7a1f0698ebf1549b7e55e6689e4ecb48" Jan 29 15:18:06 crc kubenswrapper[4620]: E0129 15:18:06.310744 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/ironic-operator@sha256:d5166d67cfb571a8b84635a479d0fada7a1f0698ebf1549b7e55e6689e4ecb48,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zwjnw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-866c9d5b98-nvhfw_openstack-operators(eb53cd1f-b0ca-41b1-a611-b71cc5ba71d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:18:06 crc kubenswrapper[4620]: E0129 15:18:06.311951 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-nvhfw" podUID="eb53cd1f-b0ca-41b1-a611-b71cc5ba71d5" Jan 29 15:18:06 crc kubenswrapper[4620]: E0129 15:18:06.502201 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/ironic-operator@sha256:d5166d67cfb571a8b84635a479d0fada7a1f0698ebf1549b7e55e6689e4ecb48\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-nvhfw" podUID="eb53cd1f-b0ca-41b1-a611-b71cc5ba71d5" Jan 29 15:18:08 crc kubenswrapper[4620]: E0129 15:18:08.755364 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:18:08 crc kubenswrapper[4620]: E0129 15:18:08.785627 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488" Jan 29 15:18:08 crc kubenswrapper[4620]: E0129 15:18:08.785845 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lhjcw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-7gtnm_openstack-operators(209e5c93-4fcb-45d3-8a50-25bfc9f954bd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:18:08 crc kubenswrapper[4620]: E0129 15:18:08.787095 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7gtnm" podUID="209e5c93-4fcb-45d3-8a50-25bfc9f954bd" Jan 29 15:18:08 crc kubenswrapper[4620]: E0129 15:18:08.874865 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:18:08 crc kubenswrapper[4620]: E0129 15:18:08.876890 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:18:09 crc kubenswrapper[4620]: E0129 15:18:09.455261 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/glance-operator@sha256:ebb3f9f6e871da3fdfdefdf4040964abcdc5f4c7dac961a27c85a80f37866f00" Jan 29 15:18:09 crc kubenswrapper[4620]: E0129 15:18:09.455442 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/glance-operator@sha256:ebb3f9f6e871da3fdfdefdf4040964abcdc5f4c7dac961a27c85a80f37866f00,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tckzn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-f8c4db9df-wdbkp_openstack-operators(0839a549-c728-40e1-bf59-d8eb6cefc3f2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:18:09 crc kubenswrapper[4620]: E0129 15:18:09.456574 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-wdbkp" podUID="0839a549-c728-40e1-bf59-d8eb6cefc3f2" Jan 29 15:18:09 crc kubenswrapper[4620]: E0129 15:18:09.511086 4620 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7gtnm" podUID="209e5c93-4fcb-45d3-8a50-25bfc9f954bd" Jan 29 15:18:09 crc kubenswrapper[4620]: E0129 15:18:09.511376 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/glance-operator@sha256:ebb3f9f6e871da3fdfdefdf4040964abcdc5f4c7dac961a27c85a80f37866f00\\\"\"" pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-wdbkp" podUID="0839a549-c728-40e1-bf59-d8eb6cefc3f2" Jan 29 15:18:10 crc kubenswrapper[4620]: I0129 15:18:10.843381 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ea8203c-2846-46ef-be3b-49596b6edc45-cert\") pod \"infra-operator-controller-manager-79955696d6-zg6ls\" (UID: \"6ea8203c-2846-46ef-be3b-49596b6edc45\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zg6ls" Jan 29 15:18:10 crc kubenswrapper[4620]: I0129 15:18:10.850216 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6ea8203c-2846-46ef-be3b-49596b6edc45-cert\") pod \"infra-operator-controller-manager-79955696d6-zg6ls\" (UID: \"6ea8203c-2846-46ef-be3b-49596b6edc45\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-zg6ls" Jan 29 15:18:11 crc kubenswrapper[4620]: I0129 15:18:11.066271 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-92wcc" Jan 29 15:18:11 crc kubenswrapper[4620]: I0129 15:18:11.072824 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zg6ls" Jan 29 15:18:11 crc kubenswrapper[4620]: I0129 15:18:11.355451 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3484963-7f9a-47b3-b59d-9390033689b6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h\" (UID: \"c3484963-7f9a-47b3-b59d-9390033689b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" Jan 29 15:18:11 crc kubenswrapper[4620]: I0129 15:18:11.382964 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3484963-7f9a-47b3-b59d-9390033689b6-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h\" (UID: \"c3484963-7f9a-47b3-b59d-9390033689b6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" Jan 29 15:18:11 crc kubenswrapper[4620]: I0129 15:18:11.410012 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-cs82s" Jan 29 15:18:11 crc kubenswrapper[4620]: I0129 15:18:11.419329 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" Jan 29 15:18:13 crc kubenswrapper[4620]: E0129 15:18:13.322636 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" Jan 29 15:18:13 crc kubenswrapper[4620]: E0129 15:18:13.445142 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:18:13 crc kubenswrapper[4620]: E0129 15:18:13.445299 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4hll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-drv9l_openshift-marketplace(8be789be-3047-4e86-84ef-9c0345fff20d): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:18:13 crc kubenswrapper[4620]: E0129 15:18:13.446418 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:18:18 crc kubenswrapper[4620]: E0129 15:18:18.398298 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 29 15:18:18 crc kubenswrapper[4620]: E0129 15:18:18.398902 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c4m88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-46gn6_openstack-operators(7cf3c71a-0b56-4367-832f-71713e2684c9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:18:18 crc kubenswrapper[4620]: E0129 15:18:18.400900 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46gn6" podUID="7cf3c71a-0b56-4367-832f-71713e2684c9" Jan 29 15:18:19 crc kubenswrapper[4620]: E0129 15:18:19.045929 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46gn6" podUID="7cf3c71a-0b56-4367-832f-71713e2684c9" Jan 29 15:18:19 crc kubenswrapper[4620]: E0129 15:18:19.875589 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:18:20 crc kubenswrapper[4620]: E0129 15:18:20.189049 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:18:20 crc kubenswrapper[4620]: E0129 15:18:20.199791 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Jan 29 15:18:20 crc kubenswrapper[4620]: E0129 15:18:20.200486 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-79gbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-gl2f9_openstack-operators(18e5dfa7-ffa3-4a45-b4bd-9b3e8f5e0f21): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:18:20 crc kubenswrapper[4620]: 
E0129 15:18:20.201947 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gl2f9" podUID="18e5dfa7-ffa3-4a45-b4bd-9b3e8f5e0f21" Jan 29 15:18:20 crc kubenswrapper[4620]: E0129 15:18:20.585070 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gl2f9" podUID="18e5dfa7-ffa3-4a45-b4bd-9b3e8f5e0f21" Jan 29 15:18:20 crc kubenswrapper[4620]: E0129 15:18:20.888382 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:18:21 crc kubenswrapper[4620]: E0129 15:18:21.868269 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/designate-operator@sha256:b0215a60bdcbb8ab35f163ea92a0d50c232e034969cdf47944bbe343671d84a9" Jan 29 15:18:21 crc kubenswrapper[4620]: E0129 15:18:21.868926 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/designate-operator@sha256:b0215a60bdcbb8ab35f163ea92a0d50c232e034969cdf47944bbe343671d84a9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lvmhk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-dd77988f8-vpfmk_openstack-operators(66c3f803-5b87-4ac9-9673-55cfa299abda): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:18:21 crc kubenswrapper[4620]: E0129 15:18:21.870100 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-dd77988f8-vpfmk" podUID="66c3f803-5b87-4ac9-9673-55cfa299abda" Jan 29 15:18:22 crc kubenswrapper[4620]: E0129 15:18:22.360830 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Jan 29 15:18:22 crc kubenswrapper[4620]: E0129 15:18:22.361015 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x7wwx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-2ktvb_openstack-operators(b5ef0cec-fba0-46b1-8410-cb3fd8551106): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:18:22 crc kubenswrapper[4620]: E0129 15:18:22.363101 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2ktvb" podUID="b5ef0cec-fba0-46b1-8410-cb3fd8551106" Jan 29 15:18:22 crc kubenswrapper[4620]: E0129 15:18:22.596550 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2ktvb" podUID="b5ef0cec-fba0-46b1-8410-cb3fd8551106" Jan 29 15:18:22 crc kubenswrapper[4620]: E0129 15:18:22.596777 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/designate-operator@sha256:b0215a60bdcbb8ab35f163ea92a0d50c232e034969cdf47944bbe343671d84a9\\\"\"" pod="openstack-operators/designate-operator-controller-manager-dd77988f8-vpfmk" podUID="66c3f803-5b87-4ac9-9673-55cfa299abda" Jan 29 15:18:26 crc kubenswrapper[4620]: E0129 15:18:26.481325 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/telemetry-operator@sha256:ee0236c7a8c8383b0a633b6f6e5f31200462ba68a51c45362836014c08c0c976" Jan 29 15:18:26 crc kubenswrapper[4620]: E0129 15:18:26.481797 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:ee0236c7a8c8383b0a633b6f6e5f31200462ba68a51c45362836014c08c0c976,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s9zck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6cf8c44c7-999b7_openstack-operators(6c2400dd-889c-46eb-8d6a-d69e1859135d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:18:26 crc kubenswrapper[4620]: E0129 15:18:26.483010 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-999b7" podUID="6c2400dd-889c-46eb-8d6a-d69e1859135d" Jan 29 15:18:26 crc kubenswrapper[4620]: E0129 15:18:26.616073 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:ee0236c7a8c8383b0a633b6f6e5f31200462ba68a51c45362836014c08c0c976\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-999b7" podUID="6c2400dd-889c-46eb-8d6a-d69e1859135d" Jan 29 15:18:27 crc kubenswrapper[4620]: E0129 15:18:27.076355 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:18:27 crc kubenswrapper[4620]: E0129 15:18:27.076354 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" Jan 29 15:18:27 crc kubenswrapper[4620]: E0129 15:18:27.087578 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Jan 29 15:18:27 crc kubenswrapper[4620]: E0129 15:18:27.087778 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7vr4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-rk4qn_openstack-operators(ba4a4cf7-578c-4426-b585-c51a610117dc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:18:27 crc kubenswrapper[4620]: E0129 15:18:27.088911 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rk4qn" podUID="ba4a4cf7-578c-4426-b585-c51a610117dc" Jan 29 15:18:27 crc kubenswrapper[4620]: E0129 15:18:27.625294 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rk4qn" 
podUID="ba4a4cf7-578c-4426-b585-c51a610117dc" Jan 29 15:18:27 crc kubenswrapper[4620]: E0129 15:18:27.878355 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/watcher-operator@sha256:d23c69ab5c7d6c649fe9e23db98eae9b9de8dce4f4901511b2b764dd366d7c2c" Jan 29 15:18:27 crc kubenswrapper[4620]: E0129 15:18:27.878516 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:d23c69ab5c7d6c649fe9e23db98eae9b9de8dce4f4901511b2b764dd366d7c2c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qwxrn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-59f4c7d7c4-f8vlq_openstack-operators(e2802504-ea02-427f-ab78-79e02a882726): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:18:27 crc kubenswrapper[4620]: E0129 15:18:27.879954 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-f8vlq" podUID="e2802504-ea02-427f-ab78-79e02a882726" Jan 29 15:18:28 crc kubenswrapper[4620]: E0129 15:18:28.827560 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/lmiccini/watcher-operator@sha256:d23c69ab5c7d6c649fe9e23db98eae9b9de8dce4f4901511b2b764dd366d7c2c\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-f8vlq" podUID="e2802504-ea02-427f-ab78-79e02a882726" Jan 29 15:18:30 crc kubenswrapper[4620]: E0129 15:18:30.544584 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/neutron-operator@sha256:1567ac98879f64271365fe819b1daeada2e65e56dc713a23e27faeb09e4a8889" Jan 29 15:18:30 crc kubenswrapper[4620]: E0129 15:18:30.544792 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:1567ac98879f64271365fe819b1daeada2e65e56dc713a23e27faeb09e4a8889,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x4f2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7c7cc6ff45-jx2jt_openstack-operators(fcbc9624-faaa-4663-8392-7684b49a3d93): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:18:30 crc kubenswrapper[4620]: E0129 15:18:30.546786 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-jx2jt" podUID="fcbc9624-faaa-4663-8392-7684b49a3d93" Jan 29 15:18:30 crc kubenswrapper[4620]: 
E0129 15:18:30.643124 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:1567ac98879f64271365fe819b1daeada2e65e56dc713a23e27faeb09e4a8889\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-jx2jt" podUID="fcbc9624-faaa-4663-8392-7684b49a3d93" Jan 29 15:18:30 crc kubenswrapper[4620]: E0129 15:18:30.881422 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:18:31 crc kubenswrapper[4620]: E0129 15:18:31.143273 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/swift-operator@sha256:4dfb3cd42806f7989d962e2346a58c6358e70cf95c41b4890e26cb5219805ac8" Jan 29 15:18:31 crc kubenswrapper[4620]: E0129 15:18:31.143539 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/swift-operator@sha256:4dfb3cd42806f7989d962e2346a58c6358e70cf95c41b4890e26cb5219805ac8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ttxqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
swift-operator-controller-manager-6f7455757b-vx5xt_openstack-operators(e50323a3-871d-41a8-b8b7-488d1fd62e6b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:18:31 crc kubenswrapper[4620]: E0129 15:18:31.145234 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-vx5xt" podUID="e50323a3-871d-41a8-b8b7-488d1fd62e6b" Jan 29 15:18:32 crc kubenswrapper[4620]: E0129 15:18:32.355437 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:18:32 crc kubenswrapper[4620]: E0129 15:18:32.361683 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Jan 29 15:18:32 crc kubenswrapper[4620]: E0129 15:18:32.361948 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6mbnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-6xsvq_openstack-operators(e788f64b-fae0-41d0-931c-a707ff7b2221): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:18:32 crc kubenswrapper[4620]: E0129 15:18:32.363151 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6xsvq" podUID="e788f64b-fae0-41d0-931c-a707ff7b2221" Jan 29 15:18:32 crc kubenswrapper[4620]: I0129 15:18:32.586445 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h"] Jan 29 15:18:35 crc kubenswrapper[4620]: E0129 15:18:35.873808 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:18:36 crc kubenswrapper[4620]: E0129 15:18:36.461769 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/nova-operator@sha256:cabd70e99de91d2731cd76d71375b4d51ab37ed1116a8e9464551e19921c7c97" Jan 29 15:18:36 crc kubenswrapper[4620]: E0129 15:18:36.462242 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:cabd70e99de91d2731cd76d71375b4d51ab37ed1116a8e9464551e19921c7c97,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
Jan 29 15:18:36 crc kubenswrapper[4620]: E0129 15:18:36.463834 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-68cb478976-m69zs" podUID="521acf13-0266-4f13-9744-0f789f922b31"
Jan 29 15:18:36 crc kubenswrapper[4620]: E0129 15:18:36.560816 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.30:5001/openstack-k8s-operators/keystone-operator:160e481fd1397d9f70c1c8e5fdfed4f1d731a193"
Jan 29 15:18:36 crc kubenswrapper[4620]: E0129 15:18:36.561471 4620 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.30:5001/openstack-k8s-operators/keystone-operator:160e481fd1397d9f70c1c8e5fdfed4f1d731a193"
Jan 29 15:18:36 crc kubenswrapper[4620]: E0129 15:18:36.561720 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.30:5001/openstack-k8s-operators/keystone-operator:160e481fd1397d9f70c1c8e5fdfed4f1d731a193,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jwwdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7f9d69db65-fpcjf_openstack-operators(379029bd-b764-49a9-b9d0-cdf5d69e2276): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 29 15:18:36 crc kubenswrapper[4620]: E0129 15:18:36.563210 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7f9d69db65-fpcjf" podUID="379029bd-b764-49a9-b9d0-cdf5d69e2276"
Jan 29 15:18:36 crc kubenswrapper[4620]: I0129 15:18:36.695670 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" event={"ID":"c3484963-7f9a-47b3-b59d-9390033689b6","Type":"ContainerStarted","Data":"4d92ee50b6ff6b71419c6a53325d82841b5dbb7e356f2e525344e6ffb4186ed3"}
Jan 29 15:18:36 crc kubenswrapper[4620]: E0129 15:18:36.700886 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:cabd70e99de91d2731cd76d71375b4d51ab37ed1116a8e9464551e19921c7c97\\\"\"" pod="openstack-operators/nova-operator-controller-manager-68cb478976-m69zs" podUID="521acf13-0266-4f13-9744-0f789f922b31"
Jan 29 15:18:36 crc kubenswrapper[4620]: E0129 15:18:36.700908 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.30:5001/openstack-k8s-operators/keystone-operator:160e481fd1397d9f70c1c8e5fdfed4f1d731a193\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7f9d69db65-fpcjf" podUID="379029bd-b764-49a9-b9d0-cdf5d69e2276"
Jan 29 15:18:36 crc kubenswrapper[4620]: I0129 15:18:36.818532 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-zg6ls"]
Jan 29 15:18:36 crc kubenswrapper[4620]: W0129 15:18:36.827867 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ea8203c_2846_46ef_be3b_49596b6edc45.slice/crio-5ac6e7f9d3dcdf43cc2795896bb298e6097630bf2ac707c1c2697126ccfbca01 WatchSource:0}: Error finding container 5ac6e7f9d3dcdf43cc2795896bb298e6097630bf2ac707c1c2697126ccfbca01: Status 404 returned error can't find the container with id 5ac6e7f9d3dcdf43cc2795896bb298e6097630bf2ac707c1c2697126ccfbca01
Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.191285 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp"]
Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.722377 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-dd77988f8-vpfmk" event={"ID":"66c3f803-5b87-4ac9-9673-55cfa299abda","Type":"ContainerStarted","Data":"16a62a89ee0eb3d67c4c81bdf9a829460a46f213748a5609a22defe8eb7269e2"}
Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.723299 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-dd77988f8-vpfmk"
Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.731960 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-858d89fd-mjknx" event={"ID":"cf77b77d-178f-45cc-854f-5ae0438eac47","Type":"ContainerStarted","Data":"2e32cbfa41dce29c0bed99320b6e3652a34dba65190477082ecbd93ae6eebbea"}
Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.732747 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-858d89fd-mjknx"
Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.755180 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gl2f9" event={"ID":"18e5dfa7-ffa3-4a45-b4bd-9b3e8f5e0f21","Type":"ContainerStarted","Data":"bd83a3dd5b8abb19c3537a3b3757921365de453901d56c7dcb222eee964817ec"}
Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.755736 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gl2f9"
Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.756924 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-dd77988f8-vpfmk" podStartSLOduration=4.420106817 podStartE2EDuration="59.756913499s" podCreationTimestamp="2026-01-29 15:17:38 +0000 UTC" firstStartedPulling="2026-01-29 15:17:41.521132374 +0000 UTC m=+1002.133960019" lastFinishedPulling="2026-01-29 15:18:36.857939056 +0000 UTC m=+1057.470766701" observedRunningTime="2026-01-29 15:18:37.749693634 +0000 UTC m=+1058.362521279" watchObservedRunningTime="2026-01-29 15:18:37.756913499 +0000 UTC m=+1058.369741134"
Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.758962 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-mhkm4" event={"ID":"c0a5289e-e06f-456b-9dd9-ab08f6f3e2f7","Type":"ContainerStarted","Data":"6b41310316ccc48d08f9dc084695b14be9214dcb7246b49edb5217d7d04a30d9"}
Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.759430 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-mhkm4"
pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-mhkm4" Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.772258 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" event={"ID":"e0487d66-d615-4562-b241-9d1424693de8","Type":"ContainerStarted","Data":"8677a2b12f36febed8efa47b447efc33e225f765705cc034aa38e72c62ff5494"} Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.772299 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" event={"ID":"e0487d66-d615-4562-b241-9d1424693de8","Type":"ContainerStarted","Data":"1ad0d550c84375c1bacfc44d1f0f2cdf53ef8a3e4c943d65e5a2e0bbb98f3fcf"} Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.772530 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.782835 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-858d89fd-mjknx" podStartSLOduration=4.248274447 podStartE2EDuration="59.78281983s" podCreationTimestamp="2026-01-29 15:17:38 +0000 UTC" firstStartedPulling="2026-01-29 15:17:41.029390378 +0000 UTC m=+1001.642218023" lastFinishedPulling="2026-01-29 15:18:36.563935761 +0000 UTC m=+1057.176763406" observedRunningTime="2026-01-29 15:18:37.781233922 +0000 UTC m=+1058.394061587" watchObservedRunningTime="2026-01-29 15:18:37.78281983 +0000 UTC m=+1058.395647475" Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.788453 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-76c896469f-zjnc9" event={"ID":"04f845cd-06d1-47cf-975c-5ad809f0734a","Type":"ContainerStarted","Data":"b3d7230d1ea98ce575e3b2e6d6049007a421d9e063a24df6b796368451deff7f"} Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.790085 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-76c896469f-zjnc9" Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.800078 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-wdbkp" event={"ID":"0839a549-c728-40e1-bf59-d8eb6cefc3f2","Type":"ContainerStarted","Data":"ec8daaf9e96b622ca75909098552d6bc70345838e223915d99fee91131010834"} Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.800872 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-wdbkp" Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.807454 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7gtnm" event={"ID":"209e5c93-4fcb-45d3-8a50-25bfc9f954bd","Type":"ContainerStarted","Data":"1be1904ab061b9f2723bbcb25fbe6b003ac75e337ae77733720f71d3dee3fade"} Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.808283 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7gtnm" Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.829006 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d6fdb96dc-5r2sp" 
event={"ID":"eba5d75b-67b6-45b6-99a6-508fbbcd6fbc","Type":"ContainerStarted","Data":"19370a9f8e8895840fda38953f6547ba72bbcb4ac7ddd5a1e61afae264cef5e4"} Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.829773 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d6fdb96dc-5r2sp" Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.839901 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zg6ls" event={"ID":"6ea8203c-2846-46ef-be3b-49596b6edc45","Type":"ContainerStarted","Data":"5ac6e7f9d3dcdf43cc2795896bb298e6097630bf2ac707c1c2697126ccfbca01"} Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.849318 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-lwdxt" event={"ID":"1fa9c6ba-6bfe-4c67-b27d-6697d9bc540d","Type":"ContainerStarted","Data":"6229e3e22a781818ab8e08301828adbe5abfd5538c19b28b0b0a3ea8d9461832"} Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.849992 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-lwdxt" Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.869573 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-mhkm4" podStartSLOduration=9.927975534 podStartE2EDuration="59.86955778s" podCreationTimestamp="2026-01-29 15:17:38 +0000 UTC" firstStartedPulling="2026-01-29 15:17:41.18908187 +0000 UTC m=+1001.801909515" lastFinishedPulling="2026-01-29 15:18:31.130664126 +0000 UTC m=+1051.743491761" observedRunningTime="2026-01-29 15:18:37.813903334 +0000 UTC m=+1058.426730989" watchObservedRunningTime="2026-01-29 15:18:37.86955778 +0000 UTC m=+1058.482385425" Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.888709 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46gn6" event={"ID":"7cf3c71a-0b56-4367-832f-71713e2684c9","Type":"ContainerStarted","Data":"5fa84798bf22e6efcd03270e14a5835cc7919edafbf28c8f903780efcacc2e98"} Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.896138 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-nvhfw" event={"ID":"eb53cd1f-b0ca-41b1-a611-b71cc5ba71d5","Type":"ContainerStarted","Data":"6001ecb1bbdfeb9b0788db682c9382ea7cbf98bb66858837a0691e902363e3ee"} Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.896380 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-nvhfw" Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.901987 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" podStartSLOduration=58.901973504 podStartE2EDuration="58.901973504s" podCreationTimestamp="2026-01-29 15:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:18:37.875698843 +0000 UTC m=+1058.488526488" watchObservedRunningTime="2026-01-29 15:18:37.901973504 +0000 UTC m=+1058.514801149" Jan 29 15:18:37 crc kubenswrapper[4620]: I0129 15:18:37.903563 4620 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gl2f9" podStartSLOduration=4.6312295599999995 podStartE2EDuration="59.903555911s" podCreationTimestamp="2026-01-29 15:17:38 +0000 UTC" firstStartedPulling="2026-01-29 15:17:41.448896209 +0000 UTC m=+1002.061723854" lastFinishedPulling="2026-01-29 15:18:36.72122256 +0000 UTC m=+1057.334050205" observedRunningTime="2026-01-29 15:18:37.901567121 +0000 UTC m=+1058.514394766" watchObservedRunningTime="2026-01-29 15:18:37.903555911 +0000 UTC m=+1058.516383556" Jan 29 15:18:38 crc kubenswrapper[4620]: I0129 15:18:38.015498 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d6fdb96dc-5r2sp" podStartSLOduration=4.117747551 podStartE2EDuration="1m0.015475351s" podCreationTimestamp="2026-01-29 15:17:38 +0000 UTC" firstStartedPulling="2026-01-29 15:17:40.534969027 +0000 UTC m=+1001.147796672" lastFinishedPulling="2026-01-29 15:18:36.432696827 +0000 UTC m=+1057.045524472" observedRunningTime="2026-01-29 15:18:38.013171031 +0000 UTC m=+1058.625998676" watchObservedRunningTime="2026-01-29 15:18:38.015475351 +0000 UTC m=+1058.628302996" Jan 29 15:18:38 crc kubenswrapper[4620]: I0129 15:18:38.019970 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-lwdxt" podStartSLOduration=5.647038553 podStartE2EDuration="1m0.019955463s" podCreationTimestamp="2026-01-29 15:17:38 +0000 UTC" firstStartedPulling="2026-01-29 15:17:42.182026403 +0000 UTC m=+1002.794854058" lastFinishedPulling="2026-01-29 15:18:36.554943323 +0000 UTC m=+1057.167770968" observedRunningTime="2026-01-29 15:18:37.969594385 +0000 UTC m=+1058.582422030" watchObservedRunningTime="2026-01-29 15:18:38.019955463 +0000 UTC m=+1058.632783108" Jan 29 15:18:38 crc kubenswrapper[4620]: I0129 15:18:38.041978 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-46gn6" podStartSLOduration=4.715551814 podStartE2EDuration="59.041960858s" podCreationTimestamp="2026-01-29 15:17:39 +0000 UTC" firstStartedPulling="2026-01-29 15:17:42.236451855 +0000 UTC m=+1002.849279500" lastFinishedPulling="2026-01-29 15:18:36.562860899 +0000 UTC m=+1057.175688544" observedRunningTime="2026-01-29 15:18:38.037094333 +0000 UTC m=+1058.649921978" watchObservedRunningTime="2026-01-29 15:18:38.041960858 +0000 UTC m=+1058.654788503" Jan 29 15:18:38 crc kubenswrapper[4620]: I0129 15:18:38.068628 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-nvhfw" podStartSLOduration=4.628339781 podStartE2EDuration="1m0.068614281s" podCreationTimestamp="2026-01-29 15:17:38 +0000 UTC" firstStartedPulling="2026-01-29 15:17:41.124140115 +0000 UTC m=+1001.736967760" lastFinishedPulling="2026-01-29 15:18:36.564414615 +0000 UTC m=+1057.177242260" observedRunningTime="2026-01-29 15:18:38.067418595 +0000 UTC m=+1058.680246260" watchObservedRunningTime="2026-01-29 15:18:38.068614281 +0000 UTC m=+1058.681441926" Jan 29 15:18:38 crc kubenswrapper[4620]: I0129 15:18:38.128945 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-76c896469f-zjnc9" podStartSLOduration=9.599281849 podStartE2EDuration="1m0.128927295s" podCreationTimestamp="2026-01-29 15:17:38 +0000 UTC" 
firstStartedPulling="2026-01-29 15:17:41.847860181 +0000 UTC m=+1002.460687816" lastFinishedPulling="2026-01-29 15:18:32.377505617 +0000 UTC m=+1052.990333262" observedRunningTime="2026-01-29 15:18:38.101690775 +0000 UTC m=+1058.714518430" watchObservedRunningTime="2026-01-29 15:18:38.128927295 +0000 UTC m=+1058.741754940" Jan 29 15:18:38 crc kubenswrapper[4620]: I0129 15:18:38.164866 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7gtnm" podStartSLOduration=4.619687776 podStartE2EDuration="59.164845473s" podCreationTimestamp="2026-01-29 15:17:39 +0000 UTC" firstStartedPulling="2026-01-29 15:17:42.022393022 +0000 UTC m=+1002.635220667" lastFinishedPulling="2026-01-29 15:18:36.567550719 +0000 UTC m=+1057.180378364" observedRunningTime="2026-01-29 15:18:38.130207443 +0000 UTC m=+1058.743035088" watchObservedRunningTime="2026-01-29 15:18:38.164845473 +0000 UTC m=+1058.777673118" Jan 29 15:18:38 crc kubenswrapper[4620]: I0129 15:18:38.907215 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2ktvb" event={"ID":"b5ef0cec-fba0-46b1-8410-cb3fd8551106","Type":"ContainerStarted","Data":"96b74199a093bafa8dec2fe57c9389259a118fe132743253ca99f948b3b33d92"} Jan 29 15:18:38 crc kubenswrapper[4620]: I0129 15:18:38.907443 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2ktvb" Jan 29 15:18:38 crc kubenswrapper[4620]: I0129 15:18:38.910272 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rk4qn" event={"ID":"ba4a4cf7-578c-4426-b585-c51a610117dc","Type":"ContainerStarted","Data":"f1194c49c37e86eca4839c879e396ba55c6681763f00a46c76b9d14aed312adb"} Jan 29 15:18:38 crc kubenswrapper[4620]: I0129 15:18:38.927326 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2ktvb" podStartSLOduration=5.348756488 podStartE2EDuration="1m0.927310445s" podCreationTimestamp="2026-01-29 15:17:38 +0000 UTC" firstStartedPulling="2026-01-29 15:17:41.795099181 +0000 UTC m=+1002.407926826" lastFinishedPulling="2026-01-29 15:18:37.373653138 +0000 UTC m=+1057.986480783" observedRunningTime="2026-01-29 15:18:38.923683026 +0000 UTC m=+1059.536510671" watchObservedRunningTime="2026-01-29 15:18:38.927310445 +0000 UTC m=+1059.540138100" Jan 29 15:18:38 crc kubenswrapper[4620]: I0129 15:18:38.927605 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-wdbkp" podStartSLOduration=5.864869917 podStartE2EDuration="1m0.927601403s" podCreationTimestamp="2026-01-29 15:17:38 +0000 UTC" firstStartedPulling="2026-01-29 15:17:41.501632348 +0000 UTC m=+1002.114459993" lastFinishedPulling="2026-01-29 15:18:36.564363834 +0000 UTC m=+1057.177191479" observedRunningTime="2026-01-29 15:18:38.17851027 +0000 UTC m=+1058.791337925" watchObservedRunningTime="2026-01-29 15:18:38.927601403 +0000 UTC m=+1059.540429048" Jan 29 15:18:38 crc kubenswrapper[4620]: I0129 15:18:38.951662 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rk4qn" podStartSLOduration=3.365575009 podStartE2EDuration="59.951645468s" podCreationTimestamp="2026-01-29 15:17:39 +0000 UTC" 
firstStartedPulling="2026-01-29 15:17:42.010248659 +0000 UTC m=+1002.623076304" lastFinishedPulling="2026-01-29 15:18:38.596319118 +0000 UTC m=+1059.209146763" observedRunningTime="2026-01-29 15:18:38.946080563 +0000 UTC m=+1059.558908208" watchObservedRunningTime="2026-01-29 15:18:38.951645468 +0000 UTC m=+1059.564473113" Jan 29 15:18:39 crc kubenswrapper[4620]: E0129 15:18:39.876199 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:18:39 crc kubenswrapper[4620]: I0129 15:18:39.929424 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rk4qn" Jan 29 15:18:41 crc kubenswrapper[4620]: E0129 15:18:41.931990 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" Jan 29 15:18:42 crc kubenswrapper[4620]: E0129 15:18:42.873977 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:18:42 crc kubenswrapper[4620]: I0129 15:18:42.955479 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zg6ls" event={"ID":"6ea8203c-2846-46ef-be3b-49596b6edc45","Type":"ContainerStarted","Data":"93cb2d1e07d9d4e266433f5cc0b3b6e90aef4b5d2d8ea4e602eee1c903efb4ec"} Jan 29 15:18:42 crc kubenswrapper[4620]: I0129 15:18:42.955611 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zg6ls" Jan 29 15:18:42 crc kubenswrapper[4620]: I0129 15:18:42.956490 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-f8vlq" event={"ID":"e2802504-ea02-427f-ab78-79e02a882726","Type":"ContainerStarted","Data":"a70b90da0d649e5f84ebd9a3f8398dd4f7b1b3e0328590e1627028ba3c8a27fa"} Jan 29 15:18:42 crc kubenswrapper[4620]: I0129 15:18:42.956627 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-f8vlq" Jan 29 15:18:42 crc kubenswrapper[4620]: I0129 15:18:42.957638 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-jx2jt" event={"ID":"fcbc9624-faaa-4663-8392-7684b49a3d93","Type":"ContainerStarted","Data":"026bf29e5a07f99a84c8cd0858f68216aa8507ac657051f16611d65734cc09d6"} Jan 29 15:18:42 crc kubenswrapper[4620]: I0129 15:18:42.957767 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-jx2jt" Jan 29 15:18:42 crc kubenswrapper[4620]: I0129 15:18:42.958628 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" event={"ID":"c3484963-7f9a-47b3-b59d-9390033689b6","Type":"ContainerStarted","Data":"cb0717ceb71f23f99a76ac4491d3c9bb74160135d5039b800728182a367a1fb6"} Jan 29 15:18:42 crc kubenswrapper[4620]: I0129 15:18:42.958786 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" Jan 29 15:18:42 crc kubenswrapper[4620]: I0129 15:18:42.960403 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-999b7" event={"ID":"6c2400dd-889c-46eb-8d6a-d69e1859135d","Type":"ContainerStarted","Data":"d0567fcc04950f9b0a1b20ec2c379f876e6bd8279c595e3917879450cf2a083a"} Jan 29 15:18:42 crc kubenswrapper[4620]: I0129 15:18:42.960817 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-999b7" Jan 29 15:18:43 crc kubenswrapper[4620]: I0129 15:18:43.005328 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zg6ls" podStartSLOduration=59.865764586 podStartE2EDuration="1m5.005308943s" podCreationTimestamp="2026-01-29 15:17:38 +0000 UTC" firstStartedPulling="2026-01-29 15:18:36.837830508 +0000 UTC m=+1057.450658153" lastFinishedPulling="2026-01-29 15:18:41.977374865 +0000 UTC m=+1062.590202510" observedRunningTime="2026-01-29 15:18:42.9793028 +0000 UTC m=+1063.592130455" watchObservedRunningTime="2026-01-29 15:18:43.005308943 +0000 UTC m=+1063.618136578" Jan 29 15:18:43 crc kubenswrapper[4620]: I0129 15:18:43.048896 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" podStartSLOduration=58.645714871 podStartE2EDuration="1m4.04887646s" podCreationTimestamp="2026-01-29 15:17:39 +0000 UTC" firstStartedPulling="2026-01-29 15:18:36.562790207 +0000 UTC m=+1057.175617852" lastFinishedPulling="2026-01-29 15:18:41.965951796 +0000 UTC m=+1062.578779441" observedRunningTime="2026-01-29 15:18:43.042911782 +0000 UTC m=+1063.655739447" watchObservedRunningTime="2026-01-29 15:18:43.04887646 +0000 UTC m=+1063.661704105" Jan 29 15:18:43 crc kubenswrapper[4620]: I0129 15:18:43.050916 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-999b7" podStartSLOduration=4.249227123 podStartE2EDuration="1m4.05090719s" podCreationTimestamp="2026-01-29 15:17:39 +0000 UTC" firstStartedPulling="2026-01-29 15:17:42.162855646 +0000 UTC m=+1002.775683291" lastFinishedPulling="2026-01-29 15:18:41.964535713 +0000 UTC m=+1062.577363358" observedRunningTime="2026-01-29 15:18:43.005773187 +0000 UTC m=+1063.618600842" watchObservedRunningTime="2026-01-29 15:18:43.05090719 +0000 UTC m=+1063.663734835" Jan 29 15:18:43 crc kubenswrapper[4620]: I0129 15:18:43.087822 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-f8vlq" podStartSLOduration=4.157992279 podStartE2EDuration="1m4.087806408s" podCreationTimestamp="2026-01-29 15:17:39 +0000 UTC" firstStartedPulling="2026-01-29 15:17:42.040437234 +0000 UTC m=+1002.653264879" lastFinishedPulling="2026-01-29 15:18:41.970251363 +0000 UTC m=+1062.583079008" observedRunningTime="2026-01-29 
15:18:43.083817319 +0000 UTC m=+1063.696644974" watchObservedRunningTime="2026-01-29 15:18:43.087806408 +0000 UTC m=+1063.700634053" Jan 29 15:18:43 crc kubenswrapper[4620]: I0129 15:18:43.131617 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-jx2jt" podStartSLOduration=4.98222608 podStartE2EDuration="1m5.13160161s" podCreationTimestamp="2026-01-29 15:17:38 +0000 UTC" firstStartedPulling="2026-01-29 15:17:42.163160486 +0000 UTC m=+1002.775988131" lastFinishedPulling="2026-01-29 15:18:42.312536006 +0000 UTC m=+1062.925363661" observedRunningTime="2026-01-29 15:18:43.125742416 +0000 UTC m=+1063.738570061" watchObservedRunningTime="2026-01-29 15:18:43.13160161 +0000 UTC m=+1063.744429255" Jan 29 15:18:44 crc kubenswrapper[4620]: E0129 15:18:44.875711 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:18:44 crc kubenswrapper[4620]: E0129 15:18:44.875890 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6xsvq" podUID="e788f64b-fae0-41d0-931c-a707ff7b2221" Jan 29 15:18:45 crc kubenswrapper[4620]: I0129 15:18:45.697737 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-65db95fbc9-lzmhp" Jan 29 15:18:45 crc kubenswrapper[4620]: E0129 15:18:45.873440 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:4dfb3cd42806f7989d962e2346a58c6358e70cf95c41b4890e26cb5219805ac8\\\"\"" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-vx5xt" podUID="e50323a3-871d-41a8-b8b7-488d1fd62e6b" Jan 29 15:18:48 crc kubenswrapper[4620]: I0129 15:18:48.948243 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d6fdb96dc-5r2sp" Jan 29 15:18:48 crc kubenswrapper[4620]: I0129 15:18:48.984899 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-858d89fd-mjknx" Jan 29 15:18:49 crc kubenswrapper[4620]: I0129 15:18:49.197401 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-wdbkp" Jan 29 15:18:49 crc kubenswrapper[4620]: I0129 15:18:49.199013 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-mhkm4" Jan 29 15:18:49 crc kubenswrapper[4620]: I0129 15:18:49.206039 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-gl2f9" Jan 29 15:18:49 crc kubenswrapper[4620]: I0129 15:18:49.280963 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-nvhfw" Jan 29 15:18:49 crc kubenswrapper[4620]: I0129 15:18:49.361191 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-dd77988f8-vpfmk" Jan 29 15:18:49 crc kubenswrapper[4620]: I0129 15:18:49.476039 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-76c896469f-zjnc9" Jan 29 15:18:49 crc kubenswrapper[4620]: I0129 15:18:49.520904 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2ktvb" Jan 29 15:18:49 crc kubenswrapper[4620]: I0129 15:18:49.555288 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-jx2jt" Jan 29 15:18:49 crc kubenswrapper[4620]: I0129 15:18:49.599565 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-lwdxt" Jan 29 15:18:49 crc kubenswrapper[4620]: I0129 15:18:49.722310 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-7gtnm" Jan 29 15:18:49 crc kubenswrapper[4620]: E0129 15:18:49.873854 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:18:49 crc kubenswrapper[4620]: I0129 15:18:49.930038 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rk4qn" Jan 29 15:18:49 crc kubenswrapper[4620]: I0129 15:18:49.946131 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-f8vlq" Jan 29 15:18:50 crc kubenswrapper[4620]: I0129 15:18:50.647289 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-999b7" Jan 29 15:18:51 crc kubenswrapper[4620]: I0129 15:18:51.078879 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-zg6ls" Jan 29 15:18:51 crc kubenswrapper[4620]: I0129 15:18:51.425180 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h" Jan 29 15:18:52 crc kubenswrapper[4620]: I0129 15:18:52.013642 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-68cb478976-m69zs" event={"ID":"521acf13-0266-4f13-9744-0f789f922b31","Type":"ContainerStarted","Data":"e25ff15ca46557eb56b1390a94e61c19b80d6e6949714418bfe3b964db52fc3e"} Jan 29 15:18:52 crc kubenswrapper[4620]: I0129 15:18:52.013898 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-68cb478976-m69zs" Jan 29 15:18:52 crc kubenswrapper[4620]: I0129 15:18:52.016029 4620 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7f9d69db65-fpcjf" event={"ID":"379029bd-b764-49a9-b9d0-cdf5d69e2276","Type":"ContainerStarted","Data":"95392ffe6921918a4a46125edd654417d4f82130b88ce40dea9620e6092bbfe6"} Jan 29 15:18:52 crc kubenswrapper[4620]: I0129 15:18:52.016327 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7f9d69db65-fpcjf" Jan 29 15:18:52 crc kubenswrapper[4620]: I0129 15:18:52.028081 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-68cb478976-m69zs" podStartSLOduration=4.400546012 podStartE2EDuration="1m14.028062205s" podCreationTimestamp="2026-01-29 15:17:38 +0000 UTC" firstStartedPulling="2026-01-29 15:17:41.81082454 +0000 UTC m=+1002.423652185" lastFinishedPulling="2026-01-29 15:18:51.438340733 +0000 UTC m=+1072.051168378" observedRunningTime="2026-01-29 15:18:52.02754268 +0000 UTC m=+1072.640370335" watchObservedRunningTime="2026-01-29 15:18:52.028062205 +0000 UTC m=+1072.640889850" Jan 29 15:18:52 crc kubenswrapper[4620]: I0129 15:18:52.051566 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7f9d69db65-fpcjf" podStartSLOduration=4.868114036 podStartE2EDuration="1m14.051551464s" podCreationTimestamp="2026-01-29 15:17:38 +0000 UTC" firstStartedPulling="2026-01-29 15:17:41.808240378 +0000 UTC m=+1002.421068023" lastFinishedPulling="2026-01-29 15:18:50.991677806 +0000 UTC m=+1071.604505451" observedRunningTime="2026-01-29 15:18:52.047597016 +0000 UTC m=+1072.660424681" watchObservedRunningTime="2026-01-29 15:18:52.051551464 +0000 UTC m=+1072.664379109" Jan 29 15:18:52 crc kubenswrapper[4620]: E0129 15:18:52.877470 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" Jan 29 15:18:54 crc kubenswrapper[4620]: E0129 15:18:54.874947 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:18:57 crc kubenswrapper[4620]: E0129 15:18:57.013476 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862" Jan 29 15:18:57 crc kubenswrapper[4620]: E0129 15:18:57.013957 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:cp-frr-files,Image:registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862,Command:[/bin/sh -c cp -rLf /tmp/frr/* 
Jan 29 15:18:57 crc kubenswrapper[4620]: E0129 15:18:57.015918 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9"
Jan 29 15:18:59 crc kubenswrapper[4620]: I0129 15:18:59.057498 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-vx5xt" event={"ID":"e50323a3-871d-41a8-b8b7-488d1fd62e6b","Type":"ContainerStarted","Data":"0d6bf031bd0fb8b794bdf25f790d60a76168a4c77fb6f0041a917e75801ee478"}
Jan 29 15:18:59 crc kubenswrapper[4620]: I0129 15:18:59.058134 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-vx5xt"
Jan 29 15:18:59 crc kubenswrapper[4620]: I0129 15:18:59.082309 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-vx5xt" podStartSLOduration=4.079498807 podStartE2EDuration="1m20.082292879s" podCreationTimestamp="2026-01-29 15:17:39 +0000 UTC" firstStartedPulling="2026-01-29 15:17:42.376424653 +0000 UTC m=+1002.989252298" lastFinishedPulling="2026-01-29 15:18:58.379218705 +0000 UTC m=+1078.992046370" observedRunningTime="2026-01-29 15:18:59.079592959 +0000 UTC m=+1079.692420604" watchObservedRunningTime="2026-01-29 15:18:59.082292879 +0000 UTC m=+1079.695120514"
Jan 29 15:18:59 crc kubenswrapper[4620]: I0129 15:18:59.427481 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7f9d69db65-fpcjf"
Jan 29 15:18:59 crc kubenswrapper[4620]: I0129 15:18:59.593535 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-68cb478976-m69zs"
Jan 29 15:18:59 crc kubenswrapper[4620]: E0129 15:18:59.875044 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d"
Jan 29 15:19:02 crc kubenswrapper[4620]: E0129 15:19:02.875238 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253"
Jan 29 15:19:03 crc kubenswrapper[4620]: I0129 15:19:03.082047 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6xsvq" event={"ID":"e788f64b-fae0-41d0-931c-a707ff7b2221","Type":"ContainerStarted","Data":"468bc0d0709eaf92634a2432db19564f56fe80fbd2efd51778b5046c7727b9b8"}
Jan 29 15:19:03 crc kubenswrapper[4620]: I0129 15:19:03.082312 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6xsvq"
Jan 29 15:19:03 crc kubenswrapper[4620]: I0129 15:19:03.098843 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6xsvq" podStartSLOduration=4.096038242 podStartE2EDuration="1m24.098825539s" podCreationTimestamp="2026-01-29 15:17:39 +0000 UTC" firstStartedPulling="2026-01-29 15:17:42.182028273 +0000 UTC m=+1002.794855918" lastFinishedPulling="2026-01-29 15:19:02.18481557 +0000 UTC m=+1082.797643215" observedRunningTime="2026-01-29 15:19:03.097183411 +0000 UTC m=+1083.710011076" watchObservedRunningTime="2026-01-29 15:19:03.098825539 +0000 UTC m=+1083.711653184"
Jan 29 15:19:04 crc kubenswrapper[4620]: E0129 15:19:04.006047 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 29 15:19:04 crc kubenswrapper[4620]: E0129 15:19:04.006184 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrq6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ql7jk_openshift-marketplace(4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 15:19:04 crc kubenswrapper[4620]: E0129 15:19:04.007256 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50"
Jan 29 15:19:07 crc kubenswrapper[4620]: E0129 15:19:07.873983 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d"
Jan 29 15:19:09 crc kubenswrapper[4620]: I0129 15:19:09.725065 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-vx5xt"
Jan 29 15:19:09 crc kubenswrapper[4620]: E0129 15:19:09.874073 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9"
Jan 29 15:19:09 crc kubenswrapper[4620]: I0129 15:19:09.920823 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-6xsvq"
Jan 29 15:19:12 crc kubenswrapper[4620]: E0129 15:19:12.874213 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d"
Jan 29 15:19:17 crc kubenswrapper[4620]: E0129 15:19:17.874218 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50"
Jan 29 15:19:18 crc kubenswrapper[4620]: E0129 15:19:18.002013 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862"
Jan 29 15:19:18 crc kubenswrapper[4620]: E0129 15:19:18.002174 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:frr-k8s-webhook-server,Image:registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862,Command:[/frr-k8s],Args:[--log-level=debug --webhook-mode=onlywebhook --disable-cert-rotation=true --namespace=$(NAMESPACE) --metrics-bind-address=:7572],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:monitoring,HostPort:0,ContainerPort:7572,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pjjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:{1 0 monitoring},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:{1 0 monitoring},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000700000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod frr-k8s-webhook-server-7df86c4f6c-klnxt_metallb-system(fd14327c-f27e-4e0b-9955-a9cc76e1c253): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
frr-k8s-webhook-server-7df86c4f6c-klnxt_metallb-system(fd14327c-f27e-4e0b-9955-a9cc76e1c253): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:19:18 crc kubenswrapper[4620]: E0129 15:19:18.003336 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:19:19 crc kubenswrapper[4620]: E0129 15:19:19.876108 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:19:21 crc kubenswrapper[4620]: E0129 15:19:21.875089 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:19:26 crc kubenswrapper[4620]: E0129 15:19:26.010060 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:19:26 crc kubenswrapper[4620]: E0129 15:19:26.010236 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
Jan 29 15:19:26 crc kubenswrapper[4620]: E0129 15:19:26.010236 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvbcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-sg6jb_openshift-marketplace(83c6763d-3a90-4ded-b50c-57eb36ad1c0d): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 15:19:26 crc kubenswrapper[4620]: E0129 15:19:26.011456 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d"
Jan 29 15:19:30 crc kubenswrapper[4620]: E0129 15:19:30.878419 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50"
Jan 29 15:19:31 crc kubenswrapper[4620]: E0129 15:19:31.874586 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253"
Jan 29 15:19:33 crc kubenswrapper[4620]: E0129 15:19:33.874045 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9"
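
From here the log settles into a steady rhythm of "Back-off pulling image" entries: after each failed pull the kubelet roughly doubles the wait before retrying, so the same message recurs per pod at a growing and then capped interval. A sketch of that schedule, assuming the kubelet's usual image-pull back-off defaults of a 10s initial delay doubling up to a 5-minute cap:

package main

import (
	"fmt"
	"time"
)

// backoffSchedule lists the delays behind the recurring "Back-off pulling
// image" entries, assuming the kubelet's usual image-pull back-off
// defaults (10s initial delay, doubling each failure, capped at 5m).
func backoffSchedule(attempts int) []time.Duration {
	const (
		initialDelay = 10 * time.Second
		maxDelay     = 5 * time.Minute
	)
	delays := make([]time.Duration, 0, attempts)
	d := initialDelay
	for i := 0; i < attempts; i++ {
		delays = append(delays, d)
		if d *= 2; d > maxDelay {
			d = maxDelay
		}
	}
	return delays
}

func main() {
	fmt.Println(backoffSchedule(8)) // [10s 20s 40s 1m20s 2m40s 5m0s 5m0s 5m0s]
}

Once the cap is reached, the retry cadence stops growing, which is why the entries below repeat at a near-constant interval until the registry starts accepting the token requests.
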
Jan 29 15:19:33 crc kubenswrapper[4620]: E0129 15:19:33.992586 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 29 15:19:33 crc kubenswrapper[4620]: E0129 15:19:33.992729 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4hll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-drv9l_openshift-marketplace(8be789be-3047-4e86-84ef-9c0345fff20d): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 15:19:33 crc kubenswrapper[4620]: E0129 15:19:33.994040 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d"
Jan 29 15:19:37 crc kubenswrapper[4620]: E0129 15:19:37.873464 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d"
Jan 29 15:19:41 crc kubenswrapper[4620]: E0129 15:19:41.875197 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50"
Jan 29 15:19:43 crc kubenswrapper[4620]: E0129 15:19:43.874741 4620 pod_workers.go:1301] "Error syncing pod, skipping"
err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:19:44 crc kubenswrapper[4620]: E0129 15:19:44.874191 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:19:48 crc kubenswrapper[4620]: E0129 15:19:48.874321 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:19:51 crc kubenswrapper[4620]: E0129 15:19:51.875493 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:19:54 crc kubenswrapper[4620]: E0129 15:19:54.873983 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" Jan 29 15:19:55 crc kubenswrapper[4620]: E0129 15:19:55.873706 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:19:57 crc kubenswrapper[4620]: E0129 15:19:57.873877 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:20:02 crc kubenswrapper[4620]: E0129 15:20:02.874536 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:20:02 crc kubenswrapper[4620]: E0129 15:20:02.874859 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:20:04 crc kubenswrapper[4620]: I0129 15:20:04.111390 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:20:04 crc kubenswrapper[4620]: I0129 15:20:04.112223 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:20:06 crc kubenswrapper[4620]: E0129 15:20:06.874845 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:20:07 crc kubenswrapper[4620]: E0129 15:20:07.874214 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" Jan 29 15:20:11 crc kubenswrapper[4620]: E0129 15:20:11.874889 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:20:14 crc kubenswrapper[4620]: I0129 15:20:14.040983 4620 scope.go:117] "RemoveContainer" containerID="451fd7faacc40e18469f5278d07b8a4c2ca9580e3679c9442d1ab4dd55e7ac8d" Jan 29 15:20:14 crc kubenswrapper[4620]: I0129 15:20:14.065306 4620 scope.go:117] "RemoveContainer" containerID="aa332fa2c313e910a813221f9debbfa1cc0baafe8c2406f32dc34b495bb22f73" Jan 29 15:20:15 crc kubenswrapper[4620]: E0129 15:20:15.878392 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:20:16 crc kubenswrapper[4620]: E0129 15:20:16.874078 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:20:21 crc kubenswrapper[4620]: E0129 15:20:21.873827 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:20:21 crc kubenswrapper[4620]: E0129 15:20:21.874183 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" Jan 29 15:20:22 crc kubenswrapper[4620]: E0129 15:20:22.873380 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:20:26 crc kubenswrapper[4620]: E0129 15:20:26.875212 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:20:28 crc kubenswrapper[4620]: E0129 15:20:28.874467 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:20:32 crc kubenswrapper[4620]: E0129 15:20:32.876019 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" Jan 29 15:20:33 crc kubenswrapper[4620]: E0129 15:20:33.874484 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:20:34 crc kubenswrapper[4620]: I0129 15:20:34.111289 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:20:34 crc kubenswrapper[4620]: I0129 15:20:34.111382 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:20:37 crc kubenswrapper[4620]: E0129 15:20:37.874889 4620 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:20:38 crc kubenswrapper[4620]: E0129 15:20:38.877179 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:20:43 crc kubenswrapper[4620]: E0129 15:20:43.874498 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:20:44 crc kubenswrapper[4620]: E0129 15:20:44.874226 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:20:46 crc kubenswrapper[4620]: E0129 15:20:46.874503 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" Jan 29 15:20:51 crc kubenswrapper[4620]: E0129 15:20:51.873621 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:20:52 crc kubenswrapper[4620]: E0129 15:20:52.873735 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:20:54 crc kubenswrapper[4620]: E0129 15:20:54.873613 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:20:56 crc kubenswrapper[4620]: E0129 15:20:56.875118 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:21:01 crc kubenswrapper[4620]: E0129 15:21:01.876040 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" Jan 29 15:21:03 crc kubenswrapper[4620]: E0129 15:21:03.873703 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:21:04 crc kubenswrapper[4620]: I0129 15:21:04.110843 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:21:04 crc kubenswrapper[4620]: I0129 15:21:04.110944 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:21:04 crc kubenswrapper[4620]: I0129 15:21:04.111040 4620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:21:04 crc kubenswrapper[4620]: I0129 15:21:04.111796 4620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"871cbbae8f526a267583740516b31e7ceb7222ae119d96420315de0c8548a400"} pod="openshift-machine-config-operator/machine-config-daemon-7469t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:21:04 crc kubenswrapper[4620]: I0129 15:21:04.111875 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" containerID="cri-o://871cbbae8f526a267583740516b31e7ceb7222ae119d96420315de0c8548a400" gracePeriod=600 Jan 29 15:21:04 crc kubenswrapper[4620]: E0129 15:21:04.874737 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:21:05 crc kubenswrapper[4620]: I0129 15:21:05.181641 4620 generic.go:334] "Generic (PLEG): container finished" podID="a76cce43-3d01-4158-b23a-e21fd5927792" containerID="871cbbae8f526a267583740516b31e7ceb7222ae119d96420315de0c8548a400" exitCode=0 Jan 29 15:21:05 crc kubenswrapper[4620]: I0129 15:21:05.181679 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" 
event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerDied","Data":"871cbbae8f526a267583740516b31e7ceb7222ae119d96420315de0c8548a400"} Jan 29 15:21:05 crc kubenswrapper[4620]: I0129 15:21:05.181986 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerStarted","Data":"167b6c8fefd91c16d6187c62bc6ae99ac84e11ac864df648cfd287babfd6eed6"} Jan 29 15:21:05 crc kubenswrapper[4620]: I0129 15:21:05.182001 4620 scope.go:117] "RemoveContainer" containerID="530d4b8825c86a1d226272eaddfd8776e92faf5ad624ab12e26dd0d4fe879bf7" Jan 29 15:21:06 crc kubenswrapper[4620]: E0129 15:21:06.874113 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:21:09 crc kubenswrapper[4620]: E0129 15:21:09.874606 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:21:14 crc kubenswrapper[4620]: I0129 15:21:14.107714 4620 scope.go:117] "RemoveContainer" containerID="11d3c468a4817d134a731352102a144d5be9555779bf0476f36536fb5b334bc1" Jan 29 15:21:16 crc kubenswrapper[4620]: E0129 15:21:16.873425 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" Jan 29 15:21:18 crc kubenswrapper[4620]: E0129 15:21:18.874974 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:21:19 crc kubenswrapper[4620]: E0129 15:21:19.875560 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:21:19 crc kubenswrapper[4620]: E0129 15:21:19.875664 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:21:23 crc kubenswrapper[4620]: E0129 15:21:23.874882 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:21:27 crc kubenswrapper[4620]: E0129 15:21:27.874626 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" Jan 29 15:21:30 crc kubenswrapper[4620]: E0129 15:21:30.882846 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:21:33 crc kubenswrapper[4620]: E0129 15:21:33.874341 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cp-frr-files\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-h6cmz" podUID="e6981bff-0689-47b1-96f6-f265340449f9" Jan 29 15:21:34 crc kubenswrapper[4620]: E0129 15:21:34.874978 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:21:36 crc kubenswrapper[4620]: E0129 15:21:36.874497 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:21:42 crc kubenswrapper[4620]: E0129 15:21:42.874157 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:21:42 crc kubenswrapper[4620]: E0129 15:21:42.875123 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" Jan 29 15:21:47 crc kubenswrapper[4620]: E0129 15:21:47.873809 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:21:50 crc kubenswrapper[4620]: E0129 15:21:50.877881 4620 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/frr-rhel9@sha256:b4dd345e67e0d4f80968f2f04aac4d5da1ce02a3b880502867e03d4fa46d3862\\\"\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podUID="fd14327c-f27e-4e0b-9955-a9cc76e1c253" Jan 29 15:21:55 crc kubenswrapper[4620]: E0129 15:21:55.832186 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:21:57 crc kubenswrapper[4620]: I0129 15:21:57.545707 4620 generic.go:334] "Generic (PLEG): container finished" podID="e6981bff-0689-47b1-96f6-f265340449f9" containerID="9a268b92c22b7af312033cd98a4e11da62876b5c2bee262402c0bef77db4f269" exitCode=0 Jan 29 15:21:57 crc kubenswrapper[4620]: I0129 15:21:57.546057 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h6cmz" event={"ID":"e6981bff-0689-47b1-96f6-f265340449f9","Type":"ContainerDied","Data":"9a268b92c22b7af312033cd98a4e11da62876b5c2bee262402c0bef77db4f269"} Jan 29 15:21:57 crc kubenswrapper[4620]: I0129 15:21:57.549563 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ql7jk" event={"ID":"4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50","Type":"ContainerStarted","Data":"99c57dbc46b7eca9790e3bce5aa0abb6d94c8cef2937b428de6d1964bb697255"} Jan 29 15:21:58 crc kubenswrapper[4620]: I0129 15:21:58.559361 4620 generic.go:334] "Generic (PLEG): container finished" podID="e6981bff-0689-47b1-96f6-f265340449f9" containerID="5724f311e04d46cfc7e986b85f6e60559f284c57b5826e8f4515b2fc9a487d78" exitCode=0 Jan 29 15:21:58 crc kubenswrapper[4620]: I0129 15:21:58.559463 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h6cmz" event={"ID":"e6981bff-0689-47b1-96f6-f265340449f9","Type":"ContainerDied","Data":"5724f311e04d46cfc7e986b85f6e60559f284c57b5826e8f4515b2fc9a487d78"} Jan 29 15:21:59 crc kubenswrapper[4620]: I0129 15:21:59.567195 4620 generic.go:334] "Generic (PLEG): container finished" podID="e6981bff-0689-47b1-96f6-f265340449f9" containerID="90f13ab8118a3323fbfabbc2ad37e27a553227a8ca66594ca77c5d78245ea8d8" exitCode=0 Jan 29 15:21:59 crc kubenswrapper[4620]: I0129 15:21:59.567234 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h6cmz" event={"ID":"e6981bff-0689-47b1-96f6-f265340449f9","Type":"ContainerDied","Data":"90f13ab8118a3323fbfabbc2ad37e27a553227a8ca66594ca77c5d78245ea8d8"} Jan 29 15:21:59 crc kubenswrapper[4620]: E0129 15:21:59.873994 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" Jan 29 15:22:00 crc kubenswrapper[4620]: I0129 15:22:00.583295 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h6cmz" event={"ID":"e6981bff-0689-47b1-96f6-f265340449f9","Type":"ContainerStarted","Data":"821e3c7287f75a5cadfe46fc962f84e2eca8a825813c55217fc9b80e6573fcb8"} Jan 29 15:22:00 crc kubenswrapper[4620]: I0129 15:22:00.583360 4620 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h6cmz" event={"ID":"e6981bff-0689-47b1-96f6-f265340449f9","Type":"ContainerStarted","Data":"89aa551873674d0989d71561283ee1fe9d52050d08df667f368abaf0f80d4eb8"} Jan 29 15:22:00 crc kubenswrapper[4620]: I0129 15:22:00.583374 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h6cmz" event={"ID":"e6981bff-0689-47b1-96f6-f265340449f9","Type":"ContainerStarted","Data":"a4de1d7fbca2171063ae2f0264a8f33048e5c6e4312e989a559e562072ed7c07"} Jan 29 15:22:00 crc kubenswrapper[4620]: I0129 15:22:00.583388 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h6cmz" event={"ID":"e6981bff-0689-47b1-96f6-f265340449f9","Type":"ContainerStarted","Data":"2ed22b0da485e8a916822726ebae515c94b86c28118c8b75bbe83eca192b8a49"} Jan 29 15:22:01 crc kubenswrapper[4620]: I0129 15:22:01.598575 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h6cmz" event={"ID":"e6981bff-0689-47b1-96f6-f265340449f9","Type":"ContainerStarted","Data":"5f72199c3498cb3338e3e59acdc602112c450d38160ccca9f62828c75d9788e0"} Jan 29 15:22:01 crc kubenswrapper[4620]: I0129 15:22:01.599481 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:22:01 crc kubenswrapper[4620]: I0129 15:22:01.599583 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-h6cmz" event={"ID":"e6981bff-0689-47b1-96f6-f265340449f9","Type":"ContainerStarted","Data":"62bc41308904b5b18f8aab1b9bf649f3707de23d5fa2242503631292d4de8867"} Jan 29 15:22:01 crc kubenswrapper[4620]: I0129 15:22:01.637320 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-h6cmz" podStartSLOduration=6.600780302 podStartE2EDuration="5m53.637213229s" podCreationTimestamp="2026-01-29 15:16:08 +0000 UTC" firstStartedPulling="2026-01-29 15:16:09.35893748 +0000 UTC m=+909.971765125" lastFinishedPulling="2026-01-29 15:21:56.395370407 +0000 UTC m=+1257.008198052" observedRunningTime="2026-01-29 15:22:01.628092933 +0000 UTC m=+1262.240920578" watchObservedRunningTime="2026-01-29 15:22:01.637213229 +0000 UTC m=+1262.250040874" Jan 29 15:22:03 crc kubenswrapper[4620]: I0129 15:22:03.618382 4620 generic.go:334] "Generic (PLEG): container finished" podID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" containerID="99c57dbc46b7eca9790e3bce5aa0abb6d94c8cef2937b428de6d1964bb697255" exitCode=0 Jan 29 15:22:03 crc kubenswrapper[4620]: I0129 15:22:03.618471 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ql7jk" event={"ID":"4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50","Type":"ContainerDied","Data":"99c57dbc46b7eca9790e3bce5aa0abb6d94c8cef2937b428de6d1964bb697255"} Jan 29 15:22:03 crc kubenswrapper[4620]: I0129 15:22:03.620351 4620 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:22:04 crc kubenswrapper[4620]: I0129 15:22:04.214036 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:22:04 crc kubenswrapper[4620]: I0129 15:22:04.262839 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:22:04 crc kubenswrapper[4620]: I0129 15:22:04.625430 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" 
event={"ID":"fd14327c-f27e-4e0b-9955-a9cc76e1c253","Type":"ContainerStarted","Data":"0aa963311472364f4a5dac3b351eef21d46ff9497be4478597732fcf1156e34a"} Jan 29 15:22:04 crc kubenswrapper[4620]: I0129 15:22:04.625592 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" Jan 29 15:22:04 crc kubenswrapper[4620]: I0129 15:22:04.627664 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ql7jk" event={"ID":"4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50","Type":"ContainerStarted","Data":"d1df97b741a863658f37edaf6a735b2e62ad885b819deee58c79f57266c96ec0"} Jan 29 15:22:04 crc kubenswrapper[4620]: I0129 15:22:04.642696 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" podStartSLOduration=-9223371680.212093 podStartE2EDuration="5m56.642682048s" podCreationTimestamp="2026-01-29 15:16:08 +0000 UTC" firstStartedPulling="2026-01-29 15:16:09.05919116 +0000 UTC m=+909.672018805" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:22:04.640328944 +0000 UTC m=+1265.253156599" watchObservedRunningTime="2026-01-29 15:22:04.642682048 +0000 UTC m=+1265.255509693" Jan 29 15:22:04 crc kubenswrapper[4620]: I0129 15:22:04.658917 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ql7jk" podStartSLOduration=3.04298733 podStartE2EDuration="5m55.658898249s" podCreationTimestamp="2026-01-29 15:16:09 +0000 UTC" firstStartedPulling="2026-01-29 15:16:11.559192846 +0000 UTC m=+912.172020491" lastFinishedPulling="2026-01-29 15:22:04.175103755 +0000 UTC m=+1264.787931410" observedRunningTime="2026-01-29 15:22:04.658655971 +0000 UTC m=+1265.271483616" watchObservedRunningTime="2026-01-29 15:22:04.658898249 +0000 UTC m=+1265.271725904" Jan 29 15:22:08 crc kubenswrapper[4620]: E0129 15:22:08.874519 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" Jan 29 15:22:10 crc kubenswrapper[4620]: I0129 15:22:10.227249 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ql7jk" Jan 29 15:22:10 crc kubenswrapper[4620]: I0129 15:22:10.227299 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ql7jk" Jan 29 15:22:10 crc kubenswrapper[4620]: I0129 15:22:10.279296 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ql7jk" Jan 29 15:22:10 crc kubenswrapper[4620]: I0129 15:22:10.736407 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ql7jk" Jan 29 15:22:10 crc kubenswrapper[4620]: I0129 15:22:10.780991 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ql7jk"] Jan 29 15:22:12 crc kubenswrapper[4620]: I0129 15:22:12.701804 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ql7jk" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" containerName="registry-server" 
containerID="cri-o://d1df97b741a863658f37edaf6a735b2e62ad885b819deee58c79f57266c96ec0" gracePeriod=2 Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.096562 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ql7jk" Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.229350 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50-utilities\") pod \"4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50\" (UID: \"4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50\") " Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.229521 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrq6w\" (UniqueName: \"kubernetes.io/projected/4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50-kube-api-access-mrq6w\") pod \"4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50\" (UID: \"4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50\") " Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.229649 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50-catalog-content\") pod \"4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50\" (UID: \"4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50\") " Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.230990 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50-utilities" (OuterVolumeSpecName: "utilities") pod "4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" (UID: "4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.231379 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.234283 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50-kube-api-access-mrq6w" (OuterVolumeSpecName: "kube-api-access-mrq6w") pod "4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" (UID: "4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50"). InnerVolumeSpecName "kube-api-access-mrq6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.310656 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" (UID: "4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.333549 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.333656 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrq6w\" (UniqueName: \"kubernetes.io/projected/4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50-kube-api-access-mrq6w\") on node \"crc\" DevicePath \"\"" Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.711047 4620 generic.go:334] "Generic (PLEG): container finished" podID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" containerID="d1df97b741a863658f37edaf6a735b2e62ad885b819deee58c79f57266c96ec0" exitCode=0 Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.711096 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ql7jk" event={"ID":"4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50","Type":"ContainerDied","Data":"d1df97b741a863658f37edaf6a735b2e62ad885b819deee58c79f57266c96ec0"} Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.711141 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ql7jk" event={"ID":"4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50","Type":"ContainerDied","Data":"a358f901415effc99790ea758b3319d125d3aafb853df83716d135fc628c37f7"} Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.711165 4620 scope.go:117] "RemoveContainer" containerID="d1df97b741a863658f37edaf6a735b2e62ad885b819deee58c79f57266c96ec0" Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.711192 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ql7jk"
Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.733414 4620 scope.go:117] "RemoveContainer" containerID="99c57dbc46b7eca9790e3bce5aa0abb6d94c8cef2937b428de6d1964bb697255"
Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.750454 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ql7jk"]
Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.754968 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ql7jk"]
Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.755948 4620 scope.go:117] "RemoveContainer" containerID="7559af10a8a7f88ac80b6f008328ef85d6db65edffd67f084083584b7543af20"
Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.777893 4620 scope.go:117] "RemoveContainer" containerID="d1df97b741a863658f37edaf6a735b2e62ad885b819deee58c79f57266c96ec0"
Jan 29 15:22:13 crc kubenswrapper[4620]: E0129 15:22:13.778266 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1df97b741a863658f37edaf6a735b2e62ad885b819deee58c79f57266c96ec0\": container with ID starting with d1df97b741a863658f37edaf6a735b2e62ad885b819deee58c79f57266c96ec0 not found: ID does not exist" containerID="d1df97b741a863658f37edaf6a735b2e62ad885b819deee58c79f57266c96ec0"
Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.778289 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1df97b741a863658f37edaf6a735b2e62ad885b819deee58c79f57266c96ec0"} err="failed to get container status \"d1df97b741a863658f37edaf6a735b2e62ad885b819deee58c79f57266c96ec0\": rpc error: code = NotFound desc = could not find container \"d1df97b741a863658f37edaf6a735b2e62ad885b819deee58c79f57266c96ec0\": container with ID starting with d1df97b741a863658f37edaf6a735b2e62ad885b819deee58c79f57266c96ec0 not found: ID does not exist"
Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.778307 4620 scope.go:117] "RemoveContainer" containerID="99c57dbc46b7eca9790e3bce5aa0abb6d94c8cef2937b428de6d1964bb697255"
Jan 29 15:22:13 crc kubenswrapper[4620]: E0129 15:22:13.778588 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99c57dbc46b7eca9790e3bce5aa0abb6d94c8cef2937b428de6d1964bb697255\": container with ID starting with 99c57dbc46b7eca9790e3bce5aa0abb6d94c8cef2937b428de6d1964bb697255 not found: ID does not exist" containerID="99c57dbc46b7eca9790e3bce5aa0abb6d94c8cef2937b428de6d1964bb697255"
Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.778638 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99c57dbc46b7eca9790e3bce5aa0abb6d94c8cef2937b428de6d1964bb697255"} err="failed to get container status \"99c57dbc46b7eca9790e3bce5aa0abb6d94c8cef2937b428de6d1964bb697255\": rpc error: code = NotFound desc = could not find container \"99c57dbc46b7eca9790e3bce5aa0abb6d94c8cef2937b428de6d1964bb697255\": container with ID starting with 99c57dbc46b7eca9790e3bce5aa0abb6d94c8cef2937b428de6d1964bb697255 not found: ID does not exist"
Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.779684 4620 scope.go:117] "RemoveContainer" containerID="7559af10a8a7f88ac80b6f008328ef85d6db65edffd67f084083584b7543af20"
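
The NotFound churn here is the benign tail of pod deletion: the kubelet re-requests status and removal for containers CRI-O has already pruned, gets gRPC NotFound back, and logs the round trip before moving on. A client treating removal as idempotent would swallow that code rather than surface it; a sketch using the standard gRPC status helpers, where removeContainer is a stand-in for the real CRI RemoveContainer call:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for the CRI RemoveContainer RPC; here it
// always reports the container as already gone, like the log above.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound,
		"could not find container %q: ID does not exist", id)
}

// removeIfPresent treats deletion as idempotent: a NotFound answer means
// the desired state (container gone) already holds, so it is not an error.
func removeIfPresent(id string) error {
	err := removeContainer(id)
	if status.Code(err) == codes.NotFound {
		return nil // already removed; nothing to do
	}
	return err
}

func main() {
	fmt.Println(removeIfPresent("d1df97b741a8")) // <nil>
}
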
failed" err="rpc error: code = NotFound desc = could not find container \"7559af10a8a7f88ac80b6f008328ef85d6db65edffd67f084083584b7543af20\": container with ID starting with 7559af10a8a7f88ac80b6f008328ef85d6db65edffd67f084083584b7543af20 not found: ID does not exist" containerID="7559af10a8a7f88ac80b6f008328ef85d6db65edffd67f084083584b7543af20" Jan 29 15:22:13 crc kubenswrapper[4620]: I0129 15:22:13.780114 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7559af10a8a7f88ac80b6f008328ef85d6db65edffd67f084083584b7543af20"} err="failed to get container status \"7559af10a8a7f88ac80b6f008328ef85d6db65edffd67f084083584b7543af20\": rpc error: code = NotFound desc = could not find container \"7559af10a8a7f88ac80b6f008328ef85d6db65edffd67f084083584b7543af20\": container with ID starting with 7559af10a8a7f88ac80b6f008328ef85d6db65edffd67f084083584b7543af20 not found: ID does not exist" Jan 29 15:22:14 crc kubenswrapper[4620]: I0129 15:22:14.720997 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg6jb" event={"ID":"83c6763d-3a90-4ded-b50c-57eb36ad1c0d","Type":"ContainerStarted","Data":"8edce13be730d96a6d7dad20e391d1b29ca6e868ba4ab5faef9f6d8afb99c1da"} Jan 29 15:22:14 crc kubenswrapper[4620]: I0129 15:22:14.893336 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" path="/var/lib/kubelet/pods/4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50/volumes" Jan 29 15:22:15 crc kubenswrapper[4620]: I0129 15:22:15.730952 4620 generic.go:334] "Generic (PLEG): container finished" podID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" containerID="8edce13be730d96a6d7dad20e391d1b29ca6e868ba4ab5faef9f6d8afb99c1da" exitCode=0 Jan 29 15:22:15 crc kubenswrapper[4620]: I0129 15:22:15.731012 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg6jb" event={"ID":"83c6763d-3a90-4ded-b50c-57eb36ad1c0d","Type":"ContainerDied","Data":"8edce13be730d96a6d7dad20e391d1b29ca6e868ba4ab5faef9f6d8afb99c1da"} Jan 29 15:22:16 crc kubenswrapper[4620]: I0129 15:22:16.738977 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg6jb" event={"ID":"83c6763d-3a90-4ded-b50c-57eb36ad1c0d","Type":"ContainerStarted","Data":"5747cfac9e7118d3c401e78270e1ec27ee0a58c0349972354f6cb8db34da43d5"} Jan 29 15:22:16 crc kubenswrapper[4620]: I0129 15:22:16.771056 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sg6jb" podStartSLOduration=1.95960907 podStartE2EDuration="5m58.771032435s" podCreationTimestamp="2026-01-29 15:16:18 +0000 UTC" firstStartedPulling="2026-01-29 15:16:19.613189034 +0000 UTC m=+920.226016679" lastFinishedPulling="2026-01-29 15:22:16.424612399 +0000 UTC m=+1277.037440044" observedRunningTime="2026-01-29 15:22:16.765938375 +0000 UTC m=+1277.378766010" watchObservedRunningTime="2026-01-29 15:22:16.771032435 +0000 UTC m=+1277.383860100" Jan 29 15:22:18 crc kubenswrapper[4620]: I0129 15:22:18.608462 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sg6jb" Jan 29 15:22:18 crc kubenswrapper[4620]: I0129 15:22:18.610151 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sg6jb" Jan 29 15:22:18 crc kubenswrapper[4620]: I0129 15:22:18.654241 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-marketplace-sg6jb" Jan 29 15:22:18 crc kubenswrapper[4620]: I0129 15:22:18.725191 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-klnxt" Jan 29 15:22:19 crc kubenswrapper[4620]: I0129 15:22:19.217891 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-h6cmz" Jan 29 15:22:21 crc kubenswrapper[4620]: I0129 15:22:21.780006 4620 generic.go:334] "Generic (PLEG): container finished" podID="8be789be-3047-4e86-84ef-9c0345fff20d" containerID="980674e9b9f95a3fe48501cb62f04c1287e51c7b65ce3bc08ffc986c34f0513c" exitCode=0 Jan 29 15:22:21 crc kubenswrapper[4620]: I0129 15:22:21.780094 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drv9l" event={"ID":"8be789be-3047-4e86-84ef-9c0345fff20d","Type":"ContainerDied","Data":"980674e9b9f95a3fe48501cb62f04c1287e51c7b65ce3bc08ffc986c34f0513c"} Jan 29 15:22:24 crc kubenswrapper[4620]: I0129 15:22:24.807981 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drv9l" event={"ID":"8be789be-3047-4e86-84ef-9c0345fff20d","Type":"ContainerStarted","Data":"93321e82cad2a675c19446d14461a756fe03f3bf8aa6d72d7542eceeafba6aaa"} Jan 29 15:22:24 crc kubenswrapper[4620]: I0129 15:22:24.837984 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-drv9l" podStartSLOduration=3.362462597 podStartE2EDuration="5m43.837965409s" podCreationTimestamp="2026-01-29 15:16:41 +0000 UTC" firstStartedPulling="2026-01-29 15:16:43.790139044 +0000 UTC m=+944.402966699" lastFinishedPulling="2026-01-29 15:22:24.265641866 +0000 UTC m=+1284.878469511" observedRunningTime="2026-01-29 15:22:24.833383635 +0000 UTC m=+1285.446211290" watchObservedRunningTime="2026-01-29 15:22:24.837965409 +0000 UTC m=+1285.450793054" Jan 29 15:22:28 crc kubenswrapper[4620]: I0129 15:22:28.656490 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sg6jb" Jan 29 15:22:28 crc kubenswrapper[4620]: I0129 15:22:28.707358 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg6jb"] Jan 29 15:22:28 crc kubenswrapper[4620]: I0129 15:22:28.849270 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sg6jb" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" containerName="registry-server" containerID="cri-o://5747cfac9e7118d3c401e78270e1ec27ee0a58c0349972354f6cb8db34da43d5" gracePeriod=2 Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.759702 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sg6jb" Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.858916 4620 generic.go:334] "Generic (PLEG): container finished" podID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" containerID="5747cfac9e7118d3c401e78270e1ec27ee0a58c0349972354f6cb8db34da43d5" exitCode=0 Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.858994 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sg6jb" Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.858987 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg6jb" event={"ID":"83c6763d-3a90-4ded-b50c-57eb36ad1c0d","Type":"ContainerDied","Data":"5747cfac9e7118d3c401e78270e1ec27ee0a58c0349972354f6cb8db34da43d5"} Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.859154 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg6jb" event={"ID":"83c6763d-3a90-4ded-b50c-57eb36ad1c0d","Type":"ContainerDied","Data":"d088e44d55acb68c349805eda31938c8e3cfbb23e60ab3262ffd6d6e5b7535e7"} Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.859189 4620 scope.go:117] "RemoveContainer" containerID="5747cfac9e7118d3c401e78270e1ec27ee0a58c0349972354f6cb8db34da43d5" Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.873231 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvbcx\" (UniqueName: \"kubernetes.io/projected/83c6763d-3a90-4ded-b50c-57eb36ad1c0d-kube-api-access-fvbcx\") pod \"83c6763d-3a90-4ded-b50c-57eb36ad1c0d\" (UID: \"83c6763d-3a90-4ded-b50c-57eb36ad1c0d\") " Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.873367 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83c6763d-3a90-4ded-b50c-57eb36ad1c0d-catalog-content\") pod \"83c6763d-3a90-4ded-b50c-57eb36ad1c0d\" (UID: \"83c6763d-3a90-4ded-b50c-57eb36ad1c0d\") " Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.873404 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83c6763d-3a90-4ded-b50c-57eb36ad1c0d-utilities\") pod \"83c6763d-3a90-4ded-b50c-57eb36ad1c0d\" (UID: \"83c6763d-3a90-4ded-b50c-57eb36ad1c0d\") " Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.874316 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83c6763d-3a90-4ded-b50c-57eb36ad1c0d-utilities" (OuterVolumeSpecName: "utilities") pod "83c6763d-3a90-4ded-b50c-57eb36ad1c0d" (UID: "83c6763d-3a90-4ded-b50c-57eb36ad1c0d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.875596 4620 scope.go:117] "RemoveContainer" containerID="8edce13be730d96a6d7dad20e391d1b29ca6e868ba4ab5faef9f6d8afb99c1da" Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.881774 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c6763d-3a90-4ded-b50c-57eb36ad1c0d-kube-api-access-fvbcx" (OuterVolumeSpecName: "kube-api-access-fvbcx") pod "83c6763d-3a90-4ded-b50c-57eb36ad1c0d" (UID: "83c6763d-3a90-4ded-b50c-57eb36ad1c0d"). InnerVolumeSpecName "kube-api-access-fvbcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.896726 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83c6763d-3a90-4ded-b50c-57eb36ad1c0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83c6763d-3a90-4ded-b50c-57eb36ad1c0d" (UID: "83c6763d-3a90-4ded-b50c-57eb36ad1c0d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.908018 4620 scope.go:117] "RemoveContainer" containerID="d6245e39e18ff5886de81f46e5371900d12e527343ee1736559406533f7f8f32" Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.930529 4620 scope.go:117] "RemoveContainer" containerID="5747cfac9e7118d3c401e78270e1ec27ee0a58c0349972354f6cb8db34da43d5" Jan 29 15:22:29 crc kubenswrapper[4620]: E0129 15:22:29.931287 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5747cfac9e7118d3c401e78270e1ec27ee0a58c0349972354f6cb8db34da43d5\": container with ID starting with 5747cfac9e7118d3c401e78270e1ec27ee0a58c0349972354f6cb8db34da43d5 not found: ID does not exist" containerID="5747cfac9e7118d3c401e78270e1ec27ee0a58c0349972354f6cb8db34da43d5" Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.931316 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5747cfac9e7118d3c401e78270e1ec27ee0a58c0349972354f6cb8db34da43d5"} err="failed to get container status \"5747cfac9e7118d3c401e78270e1ec27ee0a58c0349972354f6cb8db34da43d5\": rpc error: code = NotFound desc = could not find container \"5747cfac9e7118d3c401e78270e1ec27ee0a58c0349972354f6cb8db34da43d5\": container with ID starting with 5747cfac9e7118d3c401e78270e1ec27ee0a58c0349972354f6cb8db34da43d5 not found: ID does not exist" Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.931343 4620 scope.go:117] "RemoveContainer" containerID="8edce13be730d96a6d7dad20e391d1b29ca6e868ba4ab5faef9f6d8afb99c1da" Jan 29 15:22:29 crc kubenswrapper[4620]: E0129 15:22:29.931708 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8edce13be730d96a6d7dad20e391d1b29ca6e868ba4ab5faef9f6d8afb99c1da\": container with ID starting with 8edce13be730d96a6d7dad20e391d1b29ca6e868ba4ab5faef9f6d8afb99c1da not found: ID does not exist" containerID="8edce13be730d96a6d7dad20e391d1b29ca6e868ba4ab5faef9f6d8afb99c1da" Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.931737 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8edce13be730d96a6d7dad20e391d1b29ca6e868ba4ab5faef9f6d8afb99c1da"} err="failed to get container status \"8edce13be730d96a6d7dad20e391d1b29ca6e868ba4ab5faef9f6d8afb99c1da\": rpc error: code = NotFound desc = could not find container \"8edce13be730d96a6d7dad20e391d1b29ca6e868ba4ab5faef9f6d8afb99c1da\": container with ID starting with 8edce13be730d96a6d7dad20e391d1b29ca6e868ba4ab5faef9f6d8afb99c1da not found: ID does not exist" Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.931752 4620 scope.go:117] "RemoveContainer" containerID="d6245e39e18ff5886de81f46e5371900d12e527343ee1736559406533f7f8f32" Jan 29 15:22:29 crc kubenswrapper[4620]: E0129 15:22:29.932180 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6245e39e18ff5886de81f46e5371900d12e527343ee1736559406533f7f8f32\": container with ID starting with d6245e39e18ff5886de81f46e5371900d12e527343ee1736559406533f7f8f32 not found: ID does not exist" containerID="d6245e39e18ff5886de81f46e5371900d12e527343ee1736559406533f7f8f32" Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.932249 4620 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d6245e39e18ff5886de81f46e5371900d12e527343ee1736559406533f7f8f32"} err="failed to get container status \"d6245e39e18ff5886de81f46e5371900d12e527343ee1736559406533f7f8f32\": rpc error: code = NotFound desc = could not find container \"d6245e39e18ff5886de81f46e5371900d12e527343ee1736559406533f7f8f32\": container with ID starting with d6245e39e18ff5886de81f46e5371900d12e527343ee1736559406533f7f8f32 not found: ID does not exist" Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.975431 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83c6763d-3a90-4ded-b50c-57eb36ad1c0d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.975480 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83c6763d-3a90-4ded-b50c-57eb36ad1c0d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:22:29 crc kubenswrapper[4620]: I0129 15:22:29.975500 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvbcx\" (UniqueName: \"kubernetes.io/projected/83c6763d-3a90-4ded-b50c-57eb36ad1c0d-kube-api-access-fvbcx\") on node \"crc\" DevicePath \"\"" Jan 29 15:22:30 crc kubenswrapper[4620]: I0129 15:22:30.211685 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg6jb"] Jan 29 15:22:30 crc kubenswrapper[4620]: I0129 15:22:30.219325 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg6jb"] Jan 29 15:22:30 crc kubenswrapper[4620]: I0129 15:22:30.885045 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" path="/var/lib/kubelet/pods/83c6763d-3a90-4ded-b50c-57eb36ad1c0d/volumes" Jan 29 15:22:32 crc kubenswrapper[4620]: I0129 15:22:32.255684 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-drv9l" Jan 29 15:22:32 crc kubenswrapper[4620]: I0129 15:22:32.256049 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-drv9l" Jan 29 15:22:32 crc kubenswrapper[4620]: I0129 15:22:32.297457 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-drv9l" Jan 29 15:22:32 crc kubenswrapper[4620]: I0129 15:22:32.915379 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-drv9l" Jan 29 15:22:33 crc kubenswrapper[4620]: I0129 15:22:33.291636 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-drv9l"] Jan 29 15:22:34 crc kubenswrapper[4620]: I0129 15:22:34.893925 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-drv9l" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" containerName="registry-server" containerID="cri-o://93321e82cad2a675c19446d14461a756fe03f3bf8aa6d72d7542eceeafba6aaa" gracePeriod=2 Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.297666 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-drv9l" Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.355583 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4hll\" (UniqueName: \"kubernetes.io/projected/8be789be-3047-4e86-84ef-9c0345fff20d-kube-api-access-k4hll\") pod \"8be789be-3047-4e86-84ef-9c0345fff20d\" (UID: \"8be789be-3047-4e86-84ef-9c0345fff20d\") " Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.355633 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8be789be-3047-4e86-84ef-9c0345fff20d-catalog-content\") pod \"8be789be-3047-4e86-84ef-9c0345fff20d\" (UID: \"8be789be-3047-4e86-84ef-9c0345fff20d\") " Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.355735 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8be789be-3047-4e86-84ef-9c0345fff20d-utilities\") pod \"8be789be-3047-4e86-84ef-9c0345fff20d\" (UID: \"8be789be-3047-4e86-84ef-9c0345fff20d\") " Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.356395 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8be789be-3047-4e86-84ef-9c0345fff20d-utilities" (OuterVolumeSpecName: "utilities") pod "8be789be-3047-4e86-84ef-9c0345fff20d" (UID: "8be789be-3047-4e86-84ef-9c0345fff20d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.373209 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8be789be-3047-4e86-84ef-9c0345fff20d-kube-api-access-k4hll" (OuterVolumeSpecName: "kube-api-access-k4hll") pod "8be789be-3047-4e86-84ef-9c0345fff20d" (UID: "8be789be-3047-4e86-84ef-9c0345fff20d"). InnerVolumeSpecName "kube-api-access-k4hll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.414356 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8be789be-3047-4e86-84ef-9c0345fff20d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8be789be-3047-4e86-84ef-9c0345fff20d" (UID: "8be789be-3047-4e86-84ef-9c0345fff20d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.456994 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8be789be-3047-4e86-84ef-9c0345fff20d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.457034 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4hll\" (UniqueName: \"kubernetes.io/projected/8be789be-3047-4e86-84ef-9c0345fff20d-kube-api-access-k4hll\") on node \"crc\" DevicePath \"\"" Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.457045 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8be789be-3047-4e86-84ef-9c0345fff20d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.900706 4620 generic.go:334] "Generic (PLEG): container finished" podID="8be789be-3047-4e86-84ef-9c0345fff20d" containerID="93321e82cad2a675c19446d14461a756fe03f3bf8aa6d72d7542eceeafba6aaa" exitCode=0 Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.900806 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drv9l" Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.900822 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drv9l" event={"ID":"8be789be-3047-4e86-84ef-9c0345fff20d","Type":"ContainerDied","Data":"93321e82cad2a675c19446d14461a756fe03f3bf8aa6d72d7542eceeafba6aaa"} Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.902147 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drv9l" event={"ID":"8be789be-3047-4e86-84ef-9c0345fff20d","Type":"ContainerDied","Data":"56cac9a5f67808e8425d121be32299c2341208866ad8ee2aa2a86daa87012a0f"} Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.902167 4620 scope.go:117] "RemoveContainer" containerID="93321e82cad2a675c19446d14461a756fe03f3bf8aa6d72d7542eceeafba6aaa" Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.920320 4620 scope.go:117] "RemoveContainer" containerID="980674e9b9f95a3fe48501cb62f04c1287e51c7b65ce3bc08ffc986c34f0513c" Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.943425 4620 scope.go:117] "RemoveContainer" containerID="cf0a711d0d596cecdbaf9d79a56f727d7544ad99f878c637a2d45b504144f292" Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.945927 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-drv9l"] Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.951173 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-drv9l"] Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.973937 4620 scope.go:117] "RemoveContainer" containerID="93321e82cad2a675c19446d14461a756fe03f3bf8aa6d72d7542eceeafba6aaa" Jan 29 15:22:35 crc kubenswrapper[4620]: E0129 15:22:35.974515 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93321e82cad2a675c19446d14461a756fe03f3bf8aa6d72d7542eceeafba6aaa\": container with ID starting with 93321e82cad2a675c19446d14461a756fe03f3bf8aa6d72d7542eceeafba6aaa not found: ID does not exist" containerID="93321e82cad2a675c19446d14461a756fe03f3bf8aa6d72d7542eceeafba6aaa" Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.974550 
4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93321e82cad2a675c19446d14461a756fe03f3bf8aa6d72d7542eceeafba6aaa"} err="failed to get container status \"93321e82cad2a675c19446d14461a756fe03f3bf8aa6d72d7542eceeafba6aaa\": rpc error: code = NotFound desc = could not find container \"93321e82cad2a675c19446d14461a756fe03f3bf8aa6d72d7542eceeafba6aaa\": container with ID starting with 93321e82cad2a675c19446d14461a756fe03f3bf8aa6d72d7542eceeafba6aaa not found: ID does not exist" Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.974595 4620 scope.go:117] "RemoveContainer" containerID="980674e9b9f95a3fe48501cb62f04c1287e51c7b65ce3bc08ffc986c34f0513c" Jan 29 15:22:35 crc kubenswrapper[4620]: E0129 15:22:35.975073 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"980674e9b9f95a3fe48501cb62f04c1287e51c7b65ce3bc08ffc986c34f0513c\": container with ID starting with 980674e9b9f95a3fe48501cb62f04c1287e51c7b65ce3bc08ffc986c34f0513c not found: ID does not exist" containerID="980674e9b9f95a3fe48501cb62f04c1287e51c7b65ce3bc08ffc986c34f0513c" Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.975103 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"980674e9b9f95a3fe48501cb62f04c1287e51c7b65ce3bc08ffc986c34f0513c"} err="failed to get container status \"980674e9b9f95a3fe48501cb62f04c1287e51c7b65ce3bc08ffc986c34f0513c\": rpc error: code = NotFound desc = could not find container \"980674e9b9f95a3fe48501cb62f04c1287e51c7b65ce3bc08ffc986c34f0513c\": container with ID starting with 980674e9b9f95a3fe48501cb62f04c1287e51c7b65ce3bc08ffc986c34f0513c not found: ID does not exist" Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.975122 4620 scope.go:117] "RemoveContainer" containerID="cf0a711d0d596cecdbaf9d79a56f727d7544ad99f878c637a2d45b504144f292" Jan 29 15:22:35 crc kubenswrapper[4620]: E0129 15:22:35.975427 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf0a711d0d596cecdbaf9d79a56f727d7544ad99f878c637a2d45b504144f292\": container with ID starting with cf0a711d0d596cecdbaf9d79a56f727d7544ad99f878c637a2d45b504144f292 not found: ID does not exist" containerID="cf0a711d0d596cecdbaf9d79a56f727d7544ad99f878c637a2d45b504144f292" Jan 29 15:22:35 crc kubenswrapper[4620]: I0129 15:22:35.975455 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf0a711d0d596cecdbaf9d79a56f727d7544ad99f878c637a2d45b504144f292"} err="failed to get container status \"cf0a711d0d596cecdbaf9d79a56f727d7544ad99f878c637a2d45b504144f292\": rpc error: code = NotFound desc = could not find container \"cf0a711d0d596cecdbaf9d79a56f727d7544ad99f878c637a2d45b504144f292\": container with ID starting with cf0a711d0d596cecdbaf9d79a56f727d7544ad99f878c637a2d45b504144f292 not found: ID does not exist" Jan 29 15:22:36 crc kubenswrapper[4620]: I0129 15:22:36.890902 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" path="/var/lib/kubelet/pods/8be789be-3047-4e86-84ef-9c0345fff20d/volumes" Jan 29 15:23:04 crc kubenswrapper[4620]: I0129 15:23:04.110797 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:23:04 crc kubenswrapper[4620]: I0129 15:23:04.111325 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:23:34 crc kubenswrapper[4620]: I0129 15:23:34.110916 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:23:34 crc kubenswrapper[4620]: I0129 15:23:34.111490 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:24:04 crc kubenswrapper[4620]: I0129 15:24:04.111141 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:24:04 crc kubenswrapper[4620]: I0129 15:24:04.111637 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:24:04 crc kubenswrapper[4620]: I0129 15:24:04.111678 4620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:24:04 crc kubenswrapper[4620]: I0129 15:24:04.112242 4620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"167b6c8fefd91c16d6187c62bc6ae99ac84e11ac864df648cfd287babfd6eed6"} pod="openshift-machine-config-operator/machine-config-daemon-7469t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:24:04 crc kubenswrapper[4620]: I0129 15:24:04.112285 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" containerID="cri-o://167b6c8fefd91c16d6187c62bc6ae99ac84e11ac864df648cfd287babfd6eed6" gracePeriod=600 Jan 29 15:24:04 crc kubenswrapper[4620]: I0129 15:24:04.470926 4620 generic.go:334] "Generic (PLEG): container finished" podID="a76cce43-3d01-4158-b23a-e21fd5927792" containerID="167b6c8fefd91c16d6187c62bc6ae99ac84e11ac864df648cfd287babfd6eed6" exitCode=0 Jan 29 15:24:04 crc kubenswrapper[4620]: I0129 15:24:04.470981 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" 
event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerDied","Data":"167b6c8fefd91c16d6187c62bc6ae99ac84e11ac864df648cfd287babfd6eed6"} Jan 29 15:24:04 crc kubenswrapper[4620]: I0129 15:24:04.471018 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerStarted","Data":"68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c"} Jan 29 15:24:04 crc kubenswrapper[4620]: I0129 15:24:04.471037 4620 scope.go:117] "RemoveContainer" containerID="871cbbae8f526a267583740516b31e7ceb7222ae119d96420315de0c8548a400" Jan 29 15:24:15 crc kubenswrapper[4620]: I0129 15:24:15.975821 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-82f6x"] Jan 29 15:24:15 crc kubenswrapper[4620]: E0129 15:24:15.976654 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" containerName="extract-utilities" Jan 29 15:24:15 crc kubenswrapper[4620]: I0129 15:24:15.976666 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" containerName="extract-utilities" Jan 29 15:24:15 crc kubenswrapper[4620]: E0129 15:24:15.976679 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" containerName="registry-server" Jan 29 15:24:15 crc kubenswrapper[4620]: I0129 15:24:15.976685 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" containerName="registry-server" Jan 29 15:24:15 crc kubenswrapper[4620]: E0129 15:24:15.976694 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" containerName="extract-content" Jan 29 15:24:15 crc kubenswrapper[4620]: I0129 15:24:15.976701 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" containerName="extract-content" Jan 29 15:24:15 crc kubenswrapper[4620]: E0129 15:24:15.976711 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" containerName="extract-content" Jan 29 15:24:15 crc kubenswrapper[4620]: I0129 15:24:15.976716 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" containerName="extract-content" Jan 29 15:24:15 crc kubenswrapper[4620]: E0129 15:24:15.976726 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" containerName="registry-server" Jan 29 15:24:15 crc kubenswrapper[4620]: I0129 15:24:15.976732 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" containerName="registry-server" Jan 29 15:24:15 crc kubenswrapper[4620]: E0129 15:24:15.976741 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" containerName="registry-server" Jan 29 15:24:15 crc kubenswrapper[4620]: I0129 15:24:15.976746 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" containerName="registry-server" Jan 29 15:24:15 crc kubenswrapper[4620]: E0129 15:24:15.976772 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" containerName="extract-utilities" Jan 29 15:24:15 crc kubenswrapper[4620]: I0129 15:24:15.976778 4620 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" containerName="extract-utilities" Jan 29 15:24:15 crc kubenswrapper[4620]: E0129 15:24:15.976788 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" containerName="extract-utilities" Jan 29 15:24:15 crc kubenswrapper[4620]: I0129 15:24:15.976793 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" containerName="extract-utilities" Jan 29 15:24:15 crc kubenswrapper[4620]: E0129 15:24:15.976801 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" containerName="extract-content" Jan 29 15:24:15 crc kubenswrapper[4620]: I0129 15:24:15.976806 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" containerName="extract-content" Jan 29 15:24:15 crc kubenswrapper[4620]: I0129 15:24:15.976938 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="83c6763d-3a90-4ded-b50c-57eb36ad1c0d" containerName="registry-server" Jan 29 15:24:15 crc kubenswrapper[4620]: I0129 15:24:15.976956 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="8be789be-3047-4e86-84ef-9c0345fff20d" containerName="registry-server" Jan 29 15:24:15 crc kubenswrapper[4620]: I0129 15:24:15.976967 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e6a08c8-d1a8-49dc-a3e7-8d227b9a3c50" containerName="registry-server" Jan 29 15:24:15 crc kubenswrapper[4620]: I0129 15:24:15.978024 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82f6x" Jan 29 15:24:16 crc kubenswrapper[4620]: I0129 15:24:16.002253 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-82f6x"] Jan 29 15:24:16 crc kubenswrapper[4620]: I0129 15:24:16.109738 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe06b8b2-3256-4eb9-8456-dccfc8f61044-catalog-content\") pod \"redhat-operators-82f6x\" (UID: \"fe06b8b2-3256-4eb9-8456-dccfc8f61044\") " pod="openshift-marketplace/redhat-operators-82f6x" Jan 29 15:24:16 crc kubenswrapper[4620]: I0129 15:24:16.109808 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzzxw\" (UniqueName: \"kubernetes.io/projected/fe06b8b2-3256-4eb9-8456-dccfc8f61044-kube-api-access-gzzxw\") pod \"redhat-operators-82f6x\" (UID: \"fe06b8b2-3256-4eb9-8456-dccfc8f61044\") " pod="openshift-marketplace/redhat-operators-82f6x" Jan 29 15:24:16 crc kubenswrapper[4620]: I0129 15:24:16.109891 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe06b8b2-3256-4eb9-8456-dccfc8f61044-utilities\") pod \"redhat-operators-82f6x\" (UID: \"fe06b8b2-3256-4eb9-8456-dccfc8f61044\") " pod="openshift-marketplace/redhat-operators-82f6x" Jan 29 15:24:16 crc kubenswrapper[4620]: I0129 15:24:16.211697 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe06b8b2-3256-4eb9-8456-dccfc8f61044-utilities\") pod \"redhat-operators-82f6x\" (UID: \"fe06b8b2-3256-4eb9-8456-dccfc8f61044\") " pod="openshift-marketplace/redhat-operators-82f6x" Jan 29 15:24:16 crc kubenswrapper[4620]: I0129 15:24:16.211831 4620 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe06b8b2-3256-4eb9-8456-dccfc8f61044-catalog-content\") pod \"redhat-operators-82f6x\" (UID: \"fe06b8b2-3256-4eb9-8456-dccfc8f61044\") " pod="openshift-marketplace/redhat-operators-82f6x" Jan 29 15:24:16 crc kubenswrapper[4620]: I0129 15:24:16.211857 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzzxw\" (UniqueName: \"kubernetes.io/projected/fe06b8b2-3256-4eb9-8456-dccfc8f61044-kube-api-access-gzzxw\") pod \"redhat-operators-82f6x\" (UID: \"fe06b8b2-3256-4eb9-8456-dccfc8f61044\") " pod="openshift-marketplace/redhat-operators-82f6x" Jan 29 15:24:16 crc kubenswrapper[4620]: I0129 15:24:16.212643 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe06b8b2-3256-4eb9-8456-dccfc8f61044-catalog-content\") pod \"redhat-operators-82f6x\" (UID: \"fe06b8b2-3256-4eb9-8456-dccfc8f61044\") " pod="openshift-marketplace/redhat-operators-82f6x" Jan 29 15:24:16 crc kubenswrapper[4620]: I0129 15:24:16.212643 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe06b8b2-3256-4eb9-8456-dccfc8f61044-utilities\") pod \"redhat-operators-82f6x\" (UID: \"fe06b8b2-3256-4eb9-8456-dccfc8f61044\") " pod="openshift-marketplace/redhat-operators-82f6x" Jan 29 15:24:16 crc kubenswrapper[4620]: I0129 15:24:16.231420 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzzxw\" (UniqueName: \"kubernetes.io/projected/fe06b8b2-3256-4eb9-8456-dccfc8f61044-kube-api-access-gzzxw\") pod \"redhat-operators-82f6x\" (UID: \"fe06b8b2-3256-4eb9-8456-dccfc8f61044\") " pod="openshift-marketplace/redhat-operators-82f6x" Jan 29 15:24:16 crc kubenswrapper[4620]: I0129 15:24:16.297483 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-82f6x" Jan 29 15:24:16 crc kubenswrapper[4620]: I0129 15:24:16.739007 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-82f6x"] Jan 29 15:24:17 crc kubenswrapper[4620]: I0129 15:24:17.548752 4620 generic.go:334] "Generic (PLEG): container finished" podID="fe06b8b2-3256-4eb9-8456-dccfc8f61044" containerID="ca6d3e9ae2cee48e9b7995f5708bc282697679d789532508b190d28b1b436452" exitCode=0 Jan 29 15:24:17 crc kubenswrapper[4620]: I0129 15:24:17.549082 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82f6x" event={"ID":"fe06b8b2-3256-4eb9-8456-dccfc8f61044","Type":"ContainerDied","Data":"ca6d3e9ae2cee48e9b7995f5708bc282697679d789532508b190d28b1b436452"} Jan 29 15:24:17 crc kubenswrapper[4620]: I0129 15:24:17.549116 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82f6x" event={"ID":"fe06b8b2-3256-4eb9-8456-dccfc8f61044","Type":"ContainerStarted","Data":"bf14b33aa89112ff3b0e97ef8aaeebfc08bfaf15e54c259be1940787130980df"} Jan 29 15:24:17 crc kubenswrapper[4620]: E0129 15:24:17.690159 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:24:17 crc kubenswrapper[4620]: E0129 15:24:17.690321 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gzzxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-82f6x_openshift-marketplace(fe06b8b2-3256-4eb9-8456-dccfc8f61044): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:24:17 crc kubenswrapper[4620]: E0129 15:24:17.692304 4620 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-82f6x" podUID="fe06b8b2-3256-4eb9-8456-dccfc8f61044" Jan 29 15:24:18 crc kubenswrapper[4620]: E0129 15:24:18.555875 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-82f6x" podUID="fe06b8b2-3256-4eb9-8456-dccfc8f61044" Jan 29 15:24:34 crc kubenswrapper[4620]: I0129 15:24:34.672406 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82f6x" event={"ID":"fe06b8b2-3256-4eb9-8456-dccfc8f61044","Type":"ContainerStarted","Data":"2b0b9f67bb5df08eb2eb3fcf91f26182df7bb0a341ddabe0061176bfc3774f6a"} Jan 29 15:24:47 crc kubenswrapper[4620]: I0129 15:24:47.767063 4620 generic.go:334] "Generic (PLEG): container finished" podID="fe06b8b2-3256-4eb9-8456-dccfc8f61044" containerID="2b0b9f67bb5df08eb2eb3fcf91f26182df7bb0a341ddabe0061176bfc3774f6a" exitCode=0 Jan 29 15:24:47 crc kubenswrapper[4620]: I0129 15:24:47.767180 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82f6x" event={"ID":"fe06b8b2-3256-4eb9-8456-dccfc8f61044","Type":"ContainerDied","Data":"2b0b9f67bb5df08eb2eb3fcf91f26182df7bb0a341ddabe0061176bfc3774f6a"} Jan 29 15:24:48 crc kubenswrapper[4620]: I0129 15:24:48.776864 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82f6x" event={"ID":"fe06b8b2-3256-4eb9-8456-dccfc8f61044","Type":"ContainerStarted","Data":"17099e5d94dfe6a45af2ca9b82165bb74d389dcfd7989bfdde1cbfe719d14248"} Jan 29 15:24:48 crc kubenswrapper[4620]: I0129 15:24:48.795390 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-82f6x" podStartSLOduration=2.9083617889999998 podStartE2EDuration="33.795373826s" podCreationTimestamp="2026-01-29 15:24:15 +0000 UTC" firstStartedPulling="2026-01-29 15:24:17.550439518 +0000 UTC m=+1398.163267163" lastFinishedPulling="2026-01-29 15:24:48.437451555 +0000 UTC m=+1429.050279200" observedRunningTime="2026-01-29 15:24:48.794788338 +0000 UTC m=+1429.407615983" watchObservedRunningTime="2026-01-29 15:24:48.795373826 +0000 UTC m=+1429.408201481" Jan 29 15:24:56 crc kubenswrapper[4620]: I0129 15:24:56.298076 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-82f6x" Jan 29 15:24:56 crc kubenswrapper[4620]: I0129 15:24:56.299626 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-82f6x" Jan 29 15:24:56 crc kubenswrapper[4620]: I0129 15:24:56.356221 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-82f6x" Jan 29 15:24:56 crc kubenswrapper[4620]: I0129 15:24:56.870999 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-82f6x" Jan 29 15:24:56 crc kubenswrapper[4620]: I0129 15:24:56.914992 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-82f6x"] Jan 
29 15:24:58 crc kubenswrapper[4620]: I0129 15:24:58.843276 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-82f6x" podUID="fe06b8b2-3256-4eb9-8456-dccfc8f61044" containerName="registry-server" containerID="cri-o://17099e5d94dfe6a45af2ca9b82165bb74d389dcfd7989bfdde1cbfe719d14248" gracePeriod=2 Jan 29 15:24:59 crc kubenswrapper[4620]: I0129 15:24:59.860994 4620 generic.go:334] "Generic (PLEG): container finished" podID="fe06b8b2-3256-4eb9-8456-dccfc8f61044" containerID="17099e5d94dfe6a45af2ca9b82165bb74d389dcfd7989bfdde1cbfe719d14248" exitCode=0 Jan 29 15:24:59 crc kubenswrapper[4620]: I0129 15:24:59.861182 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82f6x" event={"ID":"fe06b8b2-3256-4eb9-8456-dccfc8f61044","Type":"ContainerDied","Data":"17099e5d94dfe6a45af2ca9b82165bb74d389dcfd7989bfdde1cbfe719d14248"} Jan 29 15:24:59 crc kubenswrapper[4620]: I0129 15:24:59.861401 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82f6x" event={"ID":"fe06b8b2-3256-4eb9-8456-dccfc8f61044","Type":"ContainerDied","Data":"bf14b33aa89112ff3b0e97ef8aaeebfc08bfaf15e54c259be1940787130980df"} Jan 29 15:24:59 crc kubenswrapper[4620]: I0129 15:24:59.861425 4620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf14b33aa89112ff3b0e97ef8aaeebfc08bfaf15e54c259be1940787130980df" Jan 29 15:24:59 crc kubenswrapper[4620]: I0129 15:24:59.878911 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82f6x" Jan 29 15:25:00 crc kubenswrapper[4620]: I0129 15:25:00.046163 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzzxw\" (UniqueName: \"kubernetes.io/projected/fe06b8b2-3256-4eb9-8456-dccfc8f61044-kube-api-access-gzzxw\") pod \"fe06b8b2-3256-4eb9-8456-dccfc8f61044\" (UID: \"fe06b8b2-3256-4eb9-8456-dccfc8f61044\") " Jan 29 15:25:00 crc kubenswrapper[4620]: I0129 15:25:00.046428 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe06b8b2-3256-4eb9-8456-dccfc8f61044-catalog-content\") pod \"fe06b8b2-3256-4eb9-8456-dccfc8f61044\" (UID: \"fe06b8b2-3256-4eb9-8456-dccfc8f61044\") " Jan 29 15:25:00 crc kubenswrapper[4620]: I0129 15:25:00.046523 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe06b8b2-3256-4eb9-8456-dccfc8f61044-utilities\") pod \"fe06b8b2-3256-4eb9-8456-dccfc8f61044\" (UID: \"fe06b8b2-3256-4eb9-8456-dccfc8f61044\") " Jan 29 15:25:00 crc kubenswrapper[4620]: I0129 15:25:00.047312 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe06b8b2-3256-4eb9-8456-dccfc8f61044-utilities" (OuterVolumeSpecName: "utilities") pod "fe06b8b2-3256-4eb9-8456-dccfc8f61044" (UID: "fe06b8b2-3256-4eb9-8456-dccfc8f61044"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:25:00 crc kubenswrapper[4620]: I0129 15:25:00.062365 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe06b8b2-3256-4eb9-8456-dccfc8f61044-kube-api-access-gzzxw" (OuterVolumeSpecName: "kube-api-access-gzzxw") pod "fe06b8b2-3256-4eb9-8456-dccfc8f61044" (UID: "fe06b8b2-3256-4eb9-8456-dccfc8f61044"). 
InnerVolumeSpecName "kube-api-access-gzzxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:25:00 crc kubenswrapper[4620]: I0129 15:25:00.147533 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe06b8b2-3256-4eb9-8456-dccfc8f61044-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:00 crc kubenswrapper[4620]: I0129 15:25:00.147750 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzzxw\" (UniqueName: \"kubernetes.io/projected/fe06b8b2-3256-4eb9-8456-dccfc8f61044-kube-api-access-gzzxw\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:00 crc kubenswrapper[4620]: I0129 15:25:00.172295 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe06b8b2-3256-4eb9-8456-dccfc8f61044-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe06b8b2-3256-4eb9-8456-dccfc8f61044" (UID: "fe06b8b2-3256-4eb9-8456-dccfc8f61044"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:25:00 crc kubenswrapper[4620]: I0129 15:25:00.250082 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe06b8b2-3256-4eb9-8456-dccfc8f61044-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:00 crc kubenswrapper[4620]: I0129 15:25:00.866982 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82f6x" Jan 29 15:25:00 crc kubenswrapper[4620]: I0129 15:25:00.907883 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-82f6x"] Jan 29 15:25:00 crc kubenswrapper[4620]: I0129 15:25:00.912943 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-82f6x"] Jan 29 15:25:02 crc kubenswrapper[4620]: I0129 15:25:02.881518 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe06b8b2-3256-4eb9-8456-dccfc8f61044" path="/var/lib/kubelet/pods/fe06b8b2-3256-4eb9-8456-dccfc8f61044/volumes" Jan 29 15:25:10 crc kubenswrapper[4620]: E0129 15:25:10.283364 4620 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe06b8b2_3256_4eb9_8456_dccfc8f61044.slice\": RecentStats: unable to find data in memory cache]" Jan 29 15:25:20 crc kubenswrapper[4620]: E0129 15:25:20.453746 4620 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe06b8b2_3256_4eb9_8456_dccfc8f61044.slice\": RecentStats: unable to find data in memory cache]" Jan 29 15:25:30 crc kubenswrapper[4620]: E0129 15:25:30.649631 4620 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe06b8b2_3256_4eb9_8456_dccfc8f61044.slice\": RecentStats: unable to find data in memory cache]" Jan 29 15:25:40 crc kubenswrapper[4620]: E0129 15:25:40.818689 4620 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe06b8b2_3256_4eb9_8456_dccfc8f61044.slice\": RecentStats: unable to find data in memory cache]" Jan 29 15:25:51 crc kubenswrapper[4620]: E0129 15:25:51.008984 
4620 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe06b8b2_3256_4eb9_8456_dccfc8f61044.slice\": RecentStats: unable to find data in memory cache]" Jan 29 15:26:04 crc kubenswrapper[4620]: I0129 15:26:04.111285 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:26:04 crc kubenswrapper[4620]: I0129 15:26:04.111768 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:26:34 crc kubenswrapper[4620]: I0129 15:26:34.111443 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:26:34 crc kubenswrapper[4620]: I0129 15:26:34.112003 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:27:04 crc kubenswrapper[4620]: I0129 15:27:04.111084 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:27:04 crc kubenswrapper[4620]: I0129 15:27:04.111611 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:27:04 crc kubenswrapper[4620]: I0129 15:27:04.111655 4620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:27:04 crc kubenswrapper[4620]: I0129 15:27:04.112228 4620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c"} pod="openshift-machine-config-operator/machine-config-daemon-7469t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:27:04 crc kubenswrapper[4620]: I0129 15:27:04.112293 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" containerID="cri-o://68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c" gracePeriod=600 Jan 
29 15:27:04 crc kubenswrapper[4620]: E0129 15:27:04.243388 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:27:04 crc kubenswrapper[4620]: I0129 15:27:04.846567 4620 generic.go:334] "Generic (PLEG): container finished" podID="a76cce43-3d01-4158-b23a-e21fd5927792" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c" exitCode=0 Jan 29 15:27:04 crc kubenswrapper[4620]: I0129 15:27:04.846945 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerDied","Data":"68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c"} Jan 29 15:27:04 crc kubenswrapper[4620]: I0129 15:27:04.847132 4620 scope.go:117] "RemoveContainer" containerID="167b6c8fefd91c16d6187c62bc6ae99ac84e11ac864df648cfd287babfd6eed6" Jan 29 15:27:04 crc kubenswrapper[4620]: I0129 15:27:04.847948 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c" Jan 29 15:27:04 crc kubenswrapper[4620]: E0129 15:27:04.848375 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:27:19 crc kubenswrapper[4620]: I0129 15:27:19.872971 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c" Jan 29 15:27:19 crc kubenswrapper[4620]: E0129 15:27:19.873685 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:27:33 crc kubenswrapper[4620]: I0129 15:27:33.872332 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c" Jan 29 15:27:33 crc kubenswrapper[4620]: E0129 15:27:33.873298 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:27:42 crc kubenswrapper[4620]: E0129 15:27:42.870920 4620 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 29 
15:27:45 crc kubenswrapper[4620]: I0129 15:27:45.872642 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c" Jan 29 15:27:45 crc kubenswrapper[4620]: E0129 15:27:45.873273 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:27:57 crc kubenswrapper[4620]: I0129 15:27:57.873399 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c" Jan 29 15:27:57 crc kubenswrapper[4620]: E0129 15:27:57.874257 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:28:09 crc kubenswrapper[4620]: I0129 15:28:09.873124 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c" Jan 29 15:28:09 crc kubenswrapper[4620]: E0129 15:28:09.875110 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:28:21 crc kubenswrapper[4620]: I0129 15:28:21.873077 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c" Jan 29 15:28:21 crc kubenswrapper[4620]: E0129 15:28:21.874178 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:28:32 crc kubenswrapper[4620]: I0129 15:28:32.872441 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c" Jan 29 15:28:32 crc kubenswrapper[4620]: E0129 15:28:32.873384 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:28:46 crc kubenswrapper[4620]: I0129 15:28:46.872805 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c" Jan 29 15:28:46 crc 
Jan 29 15:28:46 crc kubenswrapper[4620]: E0129 15:28:46.873481 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:29:00 crc kubenswrapper[4620]: I0129 15:29:00.879945 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c"
Jan 29 15:29:00 crc kubenswrapper[4620]: E0129 15:29:00.880565 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:29:13 crc kubenswrapper[4620]: I0129 15:29:13.872413 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c"
Jan 29 15:29:13 crc kubenswrapper[4620]: E0129 15:29:13.873433 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:29:26 crc kubenswrapper[4620]: I0129 15:29:26.872830 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c"
Jan 29 15:29:26 crc kubenswrapper[4620]: E0129 15:29:26.873850 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:29:39 crc kubenswrapper[4620]: I0129 15:29:39.872177 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c"
Jan 29 15:29:39 crc kubenswrapper[4620]: E0129 15:29:39.872681 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:29:50 crc kubenswrapper[4620]: I0129 15:29:50.879840 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c"
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:30:00 crc kubenswrapper[4620]: I0129 15:30:00.174978 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495010-8fcqk"] Jan 29 15:30:00 crc kubenswrapper[4620]: E0129 15:30:00.178626 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe06b8b2-3256-4eb9-8456-dccfc8f61044" containerName="extract-content" Jan 29 15:30:00 crc kubenswrapper[4620]: I0129 15:30:00.178939 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe06b8b2-3256-4eb9-8456-dccfc8f61044" containerName="extract-content" Jan 29 15:30:00 crc kubenswrapper[4620]: E0129 15:30:00.179147 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe06b8b2-3256-4eb9-8456-dccfc8f61044" containerName="registry-server" Jan 29 15:30:00 crc kubenswrapper[4620]: I0129 15:30:00.182793 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe06b8b2-3256-4eb9-8456-dccfc8f61044" containerName="registry-server" Jan 29 15:30:00 crc kubenswrapper[4620]: E0129 15:30:00.182973 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe06b8b2-3256-4eb9-8456-dccfc8f61044" containerName="extract-utilities" Jan 29 15:30:00 crc kubenswrapper[4620]: I0129 15:30:00.183089 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe06b8b2-3256-4eb9-8456-dccfc8f61044" containerName="extract-utilities" Jan 29 15:30:00 crc kubenswrapper[4620]: I0129 15:30:00.183474 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe06b8b2-3256-4eb9-8456-dccfc8f61044" containerName="registry-server" Jan 29 15:30:00 crc kubenswrapper[4620]: I0129 15:30:00.184232 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495010-8fcqk"] Jan 29 15:30:00 crc kubenswrapper[4620]: I0129 15:30:00.184432 4620 util.go:30] "No sandbox for pod can be found. 
Jan 29 15:30:00 crc kubenswrapper[4620]: I0129 15:30:00.184432 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-8fcqk"
Jan 29 15:30:00 crc kubenswrapper[4620]: I0129 15:30:00.187846 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 29 15:30:00 crc kubenswrapper[4620]: I0129 15:30:00.188214 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 29 15:30:00 crc kubenswrapper[4620]: I0129 15:30:00.292644 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5098704-1898-464b-8909-86f1f0374a3c-config-volume\") pod \"collect-profiles-29495010-8fcqk\" (UID: \"b5098704-1898-464b-8909-86f1f0374a3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-8fcqk"
Jan 29 15:30:00 crc kubenswrapper[4620]: I0129 15:30:00.292812 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5098704-1898-464b-8909-86f1f0374a3c-secret-volume\") pod \"collect-profiles-29495010-8fcqk\" (UID: \"b5098704-1898-464b-8909-86f1f0374a3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-8fcqk"
Jan 29 15:30:00 crc kubenswrapper[4620]: I0129 15:30:00.292836 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvm6v\" (UniqueName: \"kubernetes.io/projected/b5098704-1898-464b-8909-86f1f0374a3c-kube-api-access-pvm6v\") pod \"collect-profiles-29495010-8fcqk\" (UID: \"b5098704-1898-464b-8909-86f1f0374a3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-8fcqk"
Jan 29 15:30:00 crc kubenswrapper[4620]: I0129 15:30:00.395070 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5098704-1898-464b-8909-86f1f0374a3c-config-volume\") pod \"collect-profiles-29495010-8fcqk\" (UID: \"b5098704-1898-464b-8909-86f1f0374a3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-8fcqk"
Jan 29 15:30:00 crc kubenswrapper[4620]: I0129 15:30:00.395473 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvm6v\" (UniqueName: \"kubernetes.io/projected/b5098704-1898-464b-8909-86f1f0374a3c-kube-api-access-pvm6v\") pod \"collect-profiles-29495010-8fcqk\" (UID: \"b5098704-1898-464b-8909-86f1f0374a3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-8fcqk"
Jan 29 15:30:00 crc kubenswrapper[4620]: I0129 15:30:00.395553 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5098704-1898-464b-8909-86f1f0374a3c-secret-volume\") pod \"collect-profiles-29495010-8fcqk\" (UID: \"b5098704-1898-464b-8909-86f1f0374a3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-8fcqk"
Jan 29 15:30:00 crc kubenswrapper[4620]: I0129 15:30:00.396340 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5098704-1898-464b-8909-86f1f0374a3c-config-volume\") pod \"collect-profiles-29495010-8fcqk\" (UID: \"b5098704-1898-464b-8909-86f1f0374a3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-8fcqk"
Jan 29 15:30:00 crc kubenswrapper[4620]: I0129 15:30:00.407921 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5098704-1898-464b-8909-86f1f0374a3c-secret-volume\") pod \"collect-profiles-29495010-8fcqk\" (UID: \"b5098704-1898-464b-8909-86f1f0374a3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-8fcqk"
Jan 29 15:30:00 crc kubenswrapper[4620]: I0129 15:30:00.417109 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvm6v\" (UniqueName: \"kubernetes.io/projected/b5098704-1898-464b-8909-86f1f0374a3c-kube-api-access-pvm6v\") pod \"collect-profiles-29495010-8fcqk\" (UID: \"b5098704-1898-464b-8909-86f1f0374a3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-8fcqk"
Jan 29 15:30:00 crc kubenswrapper[4620]: I0129 15:30:00.513941 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-8fcqk"
Jan 29 15:30:00 crc kubenswrapper[4620]: I0129 15:30:00.958995 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495010-8fcqk"]
Jan 29 15:30:01 crc kubenswrapper[4620]: I0129 15:30:01.269908 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-8fcqk" event={"ID":"b5098704-1898-464b-8909-86f1f0374a3c","Type":"ContainerStarted","Data":"5869e8cf5dbb48d23b06169cfb234fb625944215755280c9d56f7a246c9be1a8"}
Jan 29 15:30:01 crc kubenswrapper[4620]: I0129 15:30:01.269970 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-8fcqk" event={"ID":"b5098704-1898-464b-8909-86f1f0374a3c","Type":"ContainerStarted","Data":"dc4c54d53ceffc3afa09f97247b7b296c751749aabf64815327ac0378b6fdda7"}
Jan 29 15:30:01 crc kubenswrapper[4620]: I0129 15:30:01.300660 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-8fcqk" podStartSLOduration=1.300638507 podStartE2EDuration="1.300638507s" podCreationTimestamp="2026-01-29 15:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:01.296840337 +0000 UTC m=+1741.909668012" watchObservedRunningTime="2026-01-29 15:30:01.300638507 +0000 UTC m=+1741.913466172"
Jan 29 15:30:02 crc kubenswrapper[4620]: I0129 15:30:02.281619 4620 generic.go:334] "Generic (PLEG): container finished" podID="b5098704-1898-464b-8909-86f1f0374a3c" containerID="5869e8cf5dbb48d23b06169cfb234fb625944215755280c9d56f7a246c9be1a8" exitCode=0
Jan 29 15:30:02 crc kubenswrapper[4620]: I0129 15:30:02.281686 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-8fcqk" event={"ID":"b5098704-1898-464b-8909-86f1f0374a3c","Type":"ContainerDied","Data":"5869e8cf5dbb48d23b06169cfb234fb625944215755280c9d56f7a246c9be1a8"}
Jan 29 15:30:03 crc kubenswrapper[4620]: I0129 15:30:03.553284 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-8fcqk"
Jan 29 15:30:03 crc kubenswrapper[4620]: I0129 15:30:03.646389 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5098704-1898-464b-8909-86f1f0374a3c-secret-volume\") pod \"b5098704-1898-464b-8909-86f1f0374a3c\" (UID: \"b5098704-1898-464b-8909-86f1f0374a3c\") "
Jan 29 15:30:03 crc kubenswrapper[4620]: I0129 15:30:03.646502 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvm6v\" (UniqueName: \"kubernetes.io/projected/b5098704-1898-464b-8909-86f1f0374a3c-kube-api-access-pvm6v\") pod \"b5098704-1898-464b-8909-86f1f0374a3c\" (UID: \"b5098704-1898-464b-8909-86f1f0374a3c\") "
Jan 29 15:30:03 crc kubenswrapper[4620]: I0129 15:30:03.646564 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5098704-1898-464b-8909-86f1f0374a3c-config-volume\") pod \"b5098704-1898-464b-8909-86f1f0374a3c\" (UID: \"b5098704-1898-464b-8909-86f1f0374a3c\") "
Jan 29 15:30:03 crc kubenswrapper[4620]: I0129 15:30:03.647303 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5098704-1898-464b-8909-86f1f0374a3c-config-volume" (OuterVolumeSpecName: "config-volume") pod "b5098704-1898-464b-8909-86f1f0374a3c" (UID: "b5098704-1898-464b-8909-86f1f0374a3c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:30:03 crc kubenswrapper[4620]: I0129 15:30:03.651133 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5098704-1898-464b-8909-86f1f0374a3c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b5098704-1898-464b-8909-86f1f0374a3c" (UID: "b5098704-1898-464b-8909-86f1f0374a3c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:30:03 crc kubenswrapper[4620]: I0129 15:30:03.748146 4620 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5098704-1898-464b-8909-86f1f0374a3c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:03 crc kubenswrapper[4620]: I0129 15:30:03.748207 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvm6v\" (UniqueName: \"kubernetes.io/projected/b5098704-1898-464b-8909-86f1f0374a3c-kube-api-access-pvm6v\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:03 crc kubenswrapper[4620]: I0129 15:30:03.748221 4620 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5098704-1898-464b-8909-86f1f0374a3c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:04 crc kubenswrapper[4620]: I0129 15:30:04.300154 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-8fcqk" event={"ID":"b5098704-1898-464b-8909-86f1f0374a3c","Type":"ContainerDied","Data":"dc4c54d53ceffc3afa09f97247b7b296c751749aabf64815327ac0378b6fdda7"} Jan 29 15:30:04 crc kubenswrapper[4620]: I0129 15:30:04.300311 4620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc4c54d53ceffc3afa09f97247b7b296c751749aabf64815327ac0378b6fdda7" Jan 29 15:30:04 crc kubenswrapper[4620]: I0129 15:30:04.300219 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-8fcqk" Jan 29 15:30:04 crc kubenswrapper[4620]: I0129 15:30:04.872496 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c" Jan 29 15:30:04 crc kubenswrapper[4620]: E0129 15:30:04.873348 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:30:16 crc kubenswrapper[4620]: I0129 15:30:16.872931 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c" Jan 29 15:30:16 crc kubenswrapper[4620]: E0129 15:30:16.874301 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:30:29 crc kubenswrapper[4620]: I0129 15:30:29.873283 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c" Jan 29 15:30:29 crc kubenswrapper[4620]: E0129 15:30:29.873988 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:30:40 crc kubenswrapper[4620]: I0129 15:30:40.901230 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c" Jan 29 15:30:40 crc kubenswrapper[4620]: E0129 15:30:40.903118 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:30:54 crc kubenswrapper[4620]: I0129 15:30:54.872673 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c" Jan 29 15:30:54 crc kubenswrapper[4620]: E0129 15:30:54.873403 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:31:07 crc kubenswrapper[4620]: I0129 15:31:07.872836 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c" Jan 29 15:31:07 crc kubenswrapper[4620]: E0129 15:31:07.873832 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:31:14 crc kubenswrapper[4620]: I0129 15:31:14.322979 4620 scope.go:117] "RemoveContainer" containerID="2b0b9f67bb5df08eb2eb3fcf91f26182df7bb0a341ddabe0061176bfc3774f6a" Jan 29 15:31:14 crc kubenswrapper[4620]: I0129 15:31:14.361573 4620 scope.go:117] "RemoveContainer" containerID="17099e5d94dfe6a45af2ca9b82165bb74d389dcfd7989bfdde1cbfe719d14248" Jan 29 15:31:14 crc kubenswrapper[4620]: I0129 15:31:14.400417 4620 scope.go:117] "RemoveContainer" containerID="ca6d3e9ae2cee48e9b7995f5708bc282697679d789532508b190d28b1b436452" Jan 29 15:31:19 crc kubenswrapper[4620]: I0129 15:31:19.873272 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c" Jan 29 15:31:19 crc kubenswrapper[4620]: E0129 15:31:19.873556 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:31:34 crc kubenswrapper[4620]: I0129 15:31:34.872818 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c" Jan 29 15:31:34 crc 
Jan 29 15:31:34 crc kubenswrapper[4620]: E0129 15:31:34.874576 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:31:48 crc kubenswrapper[4620]: I0129 15:31:48.872641 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c"
Jan 29 15:31:48 crc kubenswrapper[4620]: E0129 15:31:48.873239 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:31:59 crc kubenswrapper[4620]: I0129 15:31:59.872231 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c"
Jan 29 15:31:59 crc kubenswrapper[4620]: E0129 15:31:59.873914 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:32:11 crc kubenswrapper[4620]: I0129 15:32:11.872509 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c"
Jan 29 15:32:12 crc kubenswrapper[4620]: I0129 15:32:12.350053 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerStarted","Data":"c4481636fef9a26707f61498a2c56f8b28e0ec213403fd672455df1021d2e8d0"}
Jan 29 15:32:22 crc kubenswrapper[4620]: I0129 15:32:22.633800 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mhwtr"]
Jan 29 15:32:22 crc kubenswrapper[4620]: E0129 15:32:22.634699 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5098704-1898-464b-8909-86f1f0374a3c" containerName="collect-profiles"
Jan 29 15:32:22 crc kubenswrapper[4620]: I0129 15:32:22.634713 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5098704-1898-464b-8909-86f1f0374a3c" containerName="collect-profiles"
Jan 29 15:32:22 crc kubenswrapper[4620]: I0129 15:32:22.634907 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5098704-1898-464b-8909-86f1f0374a3c" containerName="collect-profiles"
Jan 29 15:32:22 crc kubenswrapper[4620]: I0129 15:32:22.636045 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mhwtr"
Jan 29 15:32:22 crc kubenswrapper[4620]: I0129 15:32:22.649999 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mhwtr"]
Jan 29 15:32:22 crc kubenswrapper[4620]: I0129 15:32:22.665137 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea02969-9297-4e18-9d76-89616a48cc87-utilities\") pod \"community-operators-mhwtr\" (UID: \"3ea02969-9297-4e18-9d76-89616a48cc87\") " pod="openshift-marketplace/community-operators-mhwtr"
Jan 29 15:32:22 crc kubenswrapper[4620]: I0129 15:32:22.665500 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea02969-9297-4e18-9d76-89616a48cc87-catalog-content\") pod \"community-operators-mhwtr\" (UID: \"3ea02969-9297-4e18-9d76-89616a48cc87\") " pod="openshift-marketplace/community-operators-mhwtr"
Jan 29 15:32:22 crc kubenswrapper[4620]: I0129 15:32:22.766945 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea02969-9297-4e18-9d76-89616a48cc87-utilities\") pod \"community-operators-mhwtr\" (UID: \"3ea02969-9297-4e18-9d76-89616a48cc87\") " pod="openshift-marketplace/community-operators-mhwtr"
Jan 29 15:32:22 crc kubenswrapper[4620]: I0129 15:32:22.767000 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea02969-9297-4e18-9d76-89616a48cc87-catalog-content\") pod \"community-operators-mhwtr\" (UID: \"3ea02969-9297-4e18-9d76-89616a48cc87\") " pod="openshift-marketplace/community-operators-mhwtr"
Jan 29 15:32:22 crc kubenswrapper[4620]: I0129 15:32:22.767047 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9vff\" (UniqueName: \"kubernetes.io/projected/3ea02969-9297-4e18-9d76-89616a48cc87-kube-api-access-b9vff\") pod \"community-operators-mhwtr\" (UID: \"3ea02969-9297-4e18-9d76-89616a48cc87\") " pod="openshift-marketplace/community-operators-mhwtr"
Jan 29 15:32:22 crc kubenswrapper[4620]: I0129 15:32:22.767455 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea02969-9297-4e18-9d76-89616a48cc87-utilities\") pod \"community-operators-mhwtr\" (UID: \"3ea02969-9297-4e18-9d76-89616a48cc87\") " pod="openshift-marketplace/community-operators-mhwtr"
Jan 29 15:32:22 crc kubenswrapper[4620]: I0129 15:32:22.767581 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea02969-9297-4e18-9d76-89616a48cc87-catalog-content\") pod \"community-operators-mhwtr\" (UID: \"3ea02969-9297-4e18-9d76-89616a48cc87\") " pod="openshift-marketplace/community-operators-mhwtr"
Jan 29 15:32:22 crc kubenswrapper[4620]: I0129 15:32:22.868231 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9vff\" (UniqueName: \"kubernetes.io/projected/3ea02969-9297-4e18-9d76-89616a48cc87-kube-api-access-b9vff\") pod \"community-operators-mhwtr\" (UID: \"3ea02969-9297-4e18-9d76-89616a48cc87\") " pod="openshift-marketplace/community-operators-mhwtr"
"MountVolume.SetUp succeeded for volume \"kube-api-access-b9vff\" (UniqueName: \"kubernetes.io/projected/3ea02969-9297-4e18-9d76-89616a48cc87-kube-api-access-b9vff\") pod \"community-operators-mhwtr\" (UID: \"3ea02969-9297-4e18-9d76-89616a48cc87\") " pod="openshift-marketplace/community-operators-mhwtr" Jan 29 15:32:22 crc kubenswrapper[4620]: I0129 15:32:22.972341 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mhwtr" Jan 29 15:32:23 crc kubenswrapper[4620]: I0129 15:32:23.341952 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mhwtr"] Jan 29 15:32:23 crc kubenswrapper[4620]: I0129 15:32:23.430914 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhwtr" event={"ID":"3ea02969-9297-4e18-9d76-89616a48cc87","Type":"ContainerStarted","Data":"b850e6e1f54efe7f5321ef4ebef15ce17dac78b48d3c25989a434b3a084084f0"} Jan 29 15:32:24 crc kubenswrapper[4620]: I0129 15:32:24.444855 4620 generic.go:334] "Generic (PLEG): container finished" podID="3ea02969-9297-4e18-9d76-89616a48cc87" containerID="cf06851a81cd63d280b4d288b1754df8b0530b7abffa64f6a7f81e6b7fc42f1b" exitCode=0 Jan 29 15:32:24 crc kubenswrapper[4620]: I0129 15:32:24.444977 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhwtr" event={"ID":"3ea02969-9297-4e18-9d76-89616a48cc87","Type":"ContainerDied","Data":"cf06851a81cd63d280b4d288b1754df8b0530b7abffa64f6a7f81e6b7fc42f1b"} Jan 29 15:32:24 crc kubenswrapper[4620]: I0129 15:32:24.451808 4620 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:32:26 crc kubenswrapper[4620]: I0129 15:32:26.466136 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhwtr" event={"ID":"3ea02969-9297-4e18-9d76-89616a48cc87","Type":"ContainerStarted","Data":"35983a3bc7d4ff41a5b74c424461dd540791058f1d986fbc2e6047fb4a38b5fd"} Jan 29 15:32:27 crc kubenswrapper[4620]: I0129 15:32:27.479374 4620 generic.go:334] "Generic (PLEG): container finished" podID="3ea02969-9297-4e18-9d76-89616a48cc87" containerID="35983a3bc7d4ff41a5b74c424461dd540791058f1d986fbc2e6047fb4a38b5fd" exitCode=0 Jan 29 15:32:27 crc kubenswrapper[4620]: I0129 15:32:27.479442 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhwtr" event={"ID":"3ea02969-9297-4e18-9d76-89616a48cc87","Type":"ContainerDied","Data":"35983a3bc7d4ff41a5b74c424461dd540791058f1d986fbc2e6047fb4a38b5fd"} Jan 29 15:32:28 crc kubenswrapper[4620]: I0129 15:32:28.486204 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhwtr" event={"ID":"3ea02969-9297-4e18-9d76-89616a48cc87","Type":"ContainerStarted","Data":"b89281c07a7222ee0c6ca0f95d6ad416f88d3b9d73f71f617cd013d78f8f8635"} Jan 29 15:32:28 crc kubenswrapper[4620]: I0129 15:32:28.503397 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mhwtr" podStartSLOduration=3.021925347 podStartE2EDuration="6.503375824s" podCreationTimestamp="2026-01-29 15:32:22 +0000 UTC" firstStartedPulling="2026-01-29 15:32:24.450846926 +0000 UTC m=+1885.063674611" lastFinishedPulling="2026-01-29 15:32:27.932297433 +0000 UTC m=+1888.545125088" observedRunningTime="2026-01-29 15:32:28.50288437 +0000 UTC m=+1889.115712035" watchObservedRunningTime="2026-01-29 
Jan 29 15:32:28 crc kubenswrapper[4620]: I0129 15:32:28.503397 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mhwtr" podStartSLOduration=3.021925347 podStartE2EDuration="6.503375824s" podCreationTimestamp="2026-01-29 15:32:22 +0000 UTC" firstStartedPulling="2026-01-29 15:32:24.450846926 +0000 UTC m=+1885.063674611" lastFinishedPulling="2026-01-29 15:32:27.932297433 +0000 UTC m=+1888.545125088" observedRunningTime="2026-01-29 15:32:28.50288437 +0000 UTC m=+1889.115712035" watchObservedRunningTime="2026-01-29 15:32:28.503375824 +0000 UTC m=+1889.116203479"
Jan 29 15:32:30 crc kubenswrapper[4620]: I0129 15:32:30.015107 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ksgkn"]
Jan 29 15:32:30 crc kubenswrapper[4620]: I0129 15:32:30.017393 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ksgkn"
Jan 29 15:32:30 crc kubenswrapper[4620]: I0129 15:32:30.036275 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksgkn"]
Jan 29 15:32:30 crc kubenswrapper[4620]: I0129 15:32:30.180244 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnsqv\" (UniqueName: \"kubernetes.io/projected/c80155fc-d965-455f-9349-49a1c8eaec07-kube-api-access-pnsqv\") pod \"redhat-marketplace-ksgkn\" (UID: \"c80155fc-d965-455f-9349-49a1c8eaec07\") " pod="openshift-marketplace/redhat-marketplace-ksgkn"
Jan 29 15:32:30 crc kubenswrapper[4620]: I0129 15:32:30.180672 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c80155fc-d965-455f-9349-49a1c8eaec07-catalog-content\") pod \"redhat-marketplace-ksgkn\" (UID: \"c80155fc-d965-455f-9349-49a1c8eaec07\") " pod="openshift-marketplace/redhat-marketplace-ksgkn"
Jan 29 15:32:30 crc kubenswrapper[4620]: I0129 15:32:30.180732 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c80155fc-d965-455f-9349-49a1c8eaec07-utilities\") pod \"redhat-marketplace-ksgkn\" (UID: \"c80155fc-d965-455f-9349-49a1c8eaec07\") " pod="openshift-marketplace/redhat-marketplace-ksgkn"
Jan 29 15:32:30 crc kubenswrapper[4620]: I0129 15:32:30.281259 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnsqv\" (UniqueName: \"kubernetes.io/projected/c80155fc-d965-455f-9349-49a1c8eaec07-kube-api-access-pnsqv\") pod \"redhat-marketplace-ksgkn\" (UID: \"c80155fc-d965-455f-9349-49a1c8eaec07\") " pod="openshift-marketplace/redhat-marketplace-ksgkn"
Jan 29 15:32:30 crc kubenswrapper[4620]: I0129 15:32:30.281311 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c80155fc-d965-455f-9349-49a1c8eaec07-catalog-content\") pod \"redhat-marketplace-ksgkn\" (UID: \"c80155fc-d965-455f-9349-49a1c8eaec07\") " pod="openshift-marketplace/redhat-marketplace-ksgkn"
Jan 29 15:32:30 crc kubenswrapper[4620]: I0129 15:32:30.281340 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c80155fc-d965-455f-9349-49a1c8eaec07-utilities\") pod \"redhat-marketplace-ksgkn\" (UID: \"c80155fc-d965-455f-9349-49a1c8eaec07\") " pod="openshift-marketplace/redhat-marketplace-ksgkn"
Jan 29 15:32:30 crc kubenswrapper[4620]: I0129 15:32:30.282123 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c80155fc-d965-455f-9349-49a1c8eaec07-utilities\") pod \"redhat-marketplace-ksgkn\" (UID: \"c80155fc-d965-455f-9349-49a1c8eaec07\") " pod="openshift-marketplace/redhat-marketplace-ksgkn"
\"kubernetes.io/empty-dir/c80155fc-d965-455f-9349-49a1c8eaec07-catalog-content\") pod \"redhat-marketplace-ksgkn\" (UID: \"c80155fc-d965-455f-9349-49a1c8eaec07\") " pod="openshift-marketplace/redhat-marketplace-ksgkn" Jan 29 15:32:30 crc kubenswrapper[4620]: I0129 15:32:30.323816 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnsqv\" (UniqueName: \"kubernetes.io/projected/c80155fc-d965-455f-9349-49a1c8eaec07-kube-api-access-pnsqv\") pod \"redhat-marketplace-ksgkn\" (UID: \"c80155fc-d965-455f-9349-49a1c8eaec07\") " pod="openshift-marketplace/redhat-marketplace-ksgkn" Jan 29 15:32:30 crc kubenswrapper[4620]: I0129 15:32:30.356107 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ksgkn" Jan 29 15:32:30 crc kubenswrapper[4620]: I0129 15:32:30.883651 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksgkn"] Jan 29 15:32:31 crc kubenswrapper[4620]: I0129 15:32:31.507138 4620 generic.go:334] "Generic (PLEG): container finished" podID="c80155fc-d965-455f-9349-49a1c8eaec07" containerID="63a511c257c764b37c1183196cfce33849fbb1492ab80376ede1de58c3c7dfec" exitCode=0 Jan 29 15:32:31 crc kubenswrapper[4620]: I0129 15:32:31.507220 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksgkn" event={"ID":"c80155fc-d965-455f-9349-49a1c8eaec07","Type":"ContainerDied","Data":"63a511c257c764b37c1183196cfce33849fbb1492ab80376ede1de58c3c7dfec"} Jan 29 15:32:31 crc kubenswrapper[4620]: I0129 15:32:31.507297 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksgkn" event={"ID":"c80155fc-d965-455f-9349-49a1c8eaec07","Type":"ContainerStarted","Data":"9b7d5dfc6df54d447b6309b1e1497c7a112435566f62a08699d6bc39d5f27502"} Jan 29 15:32:31 crc kubenswrapper[4620]: E0129 15:32:31.698321 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:32:31 crc kubenswrapper[4620]: E0129 15:32:31.698498 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
Jan 29 15:32:31 crc kubenswrapper[4620]: E0129 15:32:31.698498 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnsqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ksgkn_openshift-marketplace(c80155fc-d965-455f-9349-49a1c8eaec07): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 15:32:31 crc kubenswrapper[4620]: E0129 15:32:31.700585 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-ksgkn" podUID="c80155fc-d965-455f-9349-49a1c8eaec07"
Jan 29 15:32:32 crc kubenswrapper[4620]: E0129 15:32:32.516259 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ksgkn" podUID="c80155fc-d965-455f-9349-49a1c8eaec07"
Jan 29 15:32:32 crc kubenswrapper[4620]: I0129 15:32:32.972682 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mhwtr"
Jan 29 15:32:32 crc kubenswrapper[4620]: I0129 15:32:32.973067 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mhwtr"
Jan 29 15:32:33 crc kubenswrapper[4620]: I0129 15:32:33.040359 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mhwtr"
Jan 29 15:32:33 crc kubenswrapper[4620]: I0129 15:32:33.594914 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mhwtr"
Jan 29 15:32:35 crc kubenswrapper[4620]: I0129 15:32:35.208390 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mhwtr"]
pod="openshift-marketplace/community-operators-mhwtr" podUID="3ea02969-9297-4e18-9d76-89616a48cc87" containerName="registry-server" containerID="cri-o://b89281c07a7222ee0c6ca0f95d6ad416f88d3b9d73f71f617cd013d78f8f8635" gracePeriod=2 Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.476265 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mhwtr" Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.489251 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea02969-9297-4e18-9d76-89616a48cc87-utilities\") pod \"3ea02969-9297-4e18-9d76-89616a48cc87\" (UID: \"3ea02969-9297-4e18-9d76-89616a48cc87\") " Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.489302 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9vff\" (UniqueName: \"kubernetes.io/projected/3ea02969-9297-4e18-9d76-89616a48cc87-kube-api-access-b9vff\") pod \"3ea02969-9297-4e18-9d76-89616a48cc87\" (UID: \"3ea02969-9297-4e18-9d76-89616a48cc87\") " Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.489418 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea02969-9297-4e18-9d76-89616a48cc87-catalog-content\") pod \"3ea02969-9297-4e18-9d76-89616a48cc87\" (UID: \"3ea02969-9297-4e18-9d76-89616a48cc87\") " Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.490442 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ea02969-9297-4e18-9d76-89616a48cc87-utilities" (OuterVolumeSpecName: "utilities") pod "3ea02969-9297-4e18-9d76-89616a48cc87" (UID: "3ea02969-9297-4e18-9d76-89616a48cc87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.503864 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea02969-9297-4e18-9d76-89616a48cc87-kube-api-access-b9vff" (OuterVolumeSpecName: "kube-api-access-b9vff") pod "3ea02969-9297-4e18-9d76-89616a48cc87" (UID: "3ea02969-9297-4e18-9d76-89616a48cc87"). InnerVolumeSpecName "kube-api-access-b9vff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.552326 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ea02969-9297-4e18-9d76-89616a48cc87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ea02969-9297-4e18-9d76-89616a48cc87" (UID: "3ea02969-9297-4e18-9d76-89616a48cc87"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.557178 4620 generic.go:334] "Generic (PLEG): container finished" podID="3ea02969-9297-4e18-9d76-89616a48cc87" containerID="b89281c07a7222ee0c6ca0f95d6ad416f88d3b9d73f71f617cd013d78f8f8635" exitCode=0 Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.557237 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhwtr" event={"ID":"3ea02969-9297-4e18-9d76-89616a48cc87","Type":"ContainerDied","Data":"b89281c07a7222ee0c6ca0f95d6ad416f88d3b9d73f71f617cd013d78f8f8635"} Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.557278 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhwtr" event={"ID":"3ea02969-9297-4e18-9d76-89616a48cc87","Type":"ContainerDied","Data":"b850e6e1f54efe7f5321ef4ebef15ce17dac78b48d3c25989a434b3a084084f0"} Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.557262 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mhwtr" Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.557303 4620 scope.go:117] "RemoveContainer" containerID="b89281c07a7222ee0c6ca0f95d6ad416f88d3b9d73f71f617cd013d78f8f8635" Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.574467 4620 scope.go:117] "RemoveContainer" containerID="35983a3bc7d4ff41a5b74c424461dd540791058f1d986fbc2e6047fb4a38b5fd" Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.592993 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mhwtr"] Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.594678 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea02969-9297-4e18-9d76-89616a48cc87-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.594712 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea02969-9297-4e18-9d76-89616a48cc87-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.594724 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9vff\" (UniqueName: \"kubernetes.io/projected/3ea02969-9297-4e18-9d76-89616a48cc87-kube-api-access-b9vff\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.598325 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mhwtr"] Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.610249 4620 scope.go:117] "RemoveContainer" containerID="cf06851a81cd63d280b4d288b1754df8b0530b7abffa64f6a7f81e6b7fc42f1b" Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.623034 4620 scope.go:117] "RemoveContainer" containerID="b89281c07a7222ee0c6ca0f95d6ad416f88d3b9d73f71f617cd013d78f8f8635" Jan 29 15:32:37 crc kubenswrapper[4620]: E0129 15:32:37.623481 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b89281c07a7222ee0c6ca0f95d6ad416f88d3b9d73f71f617cd013d78f8f8635\": container with ID starting with b89281c07a7222ee0c6ca0f95d6ad416f88d3b9d73f71f617cd013d78f8f8635 not found: ID does not exist" containerID="b89281c07a7222ee0c6ca0f95d6ad416f88d3b9d73f71f617cd013d78f8f8635" Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.623516 
Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.623516 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b89281c07a7222ee0c6ca0f95d6ad416f88d3b9d73f71f617cd013d78f8f8635"} err="failed to get container status \"b89281c07a7222ee0c6ca0f95d6ad416f88d3b9d73f71f617cd013d78f8f8635\": rpc error: code = NotFound desc = could not find container \"b89281c07a7222ee0c6ca0f95d6ad416f88d3b9d73f71f617cd013d78f8f8635\": container with ID starting with b89281c07a7222ee0c6ca0f95d6ad416f88d3b9d73f71f617cd013d78f8f8635 not found: ID does not exist"
Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.623537 4620 scope.go:117] "RemoveContainer" containerID="35983a3bc7d4ff41a5b74c424461dd540791058f1d986fbc2e6047fb4a38b5fd"
Jan 29 15:32:37 crc kubenswrapper[4620]: E0129 15:32:37.623806 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35983a3bc7d4ff41a5b74c424461dd540791058f1d986fbc2e6047fb4a38b5fd\": container with ID starting with 35983a3bc7d4ff41a5b74c424461dd540791058f1d986fbc2e6047fb4a38b5fd not found: ID does not exist" containerID="35983a3bc7d4ff41a5b74c424461dd540791058f1d986fbc2e6047fb4a38b5fd"
Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.623889 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35983a3bc7d4ff41a5b74c424461dd540791058f1d986fbc2e6047fb4a38b5fd"} err="failed to get container status \"35983a3bc7d4ff41a5b74c424461dd540791058f1d986fbc2e6047fb4a38b5fd\": rpc error: code = NotFound desc = could not find container \"35983a3bc7d4ff41a5b74c424461dd540791058f1d986fbc2e6047fb4a38b5fd\": container with ID starting with 35983a3bc7d4ff41a5b74c424461dd540791058f1d986fbc2e6047fb4a38b5fd not found: ID does not exist"
Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.623909 4620 scope.go:117] "RemoveContainer" containerID="cf06851a81cd63d280b4d288b1754df8b0530b7abffa64f6a7f81e6b7fc42f1b"
Jan 29 15:32:37 crc kubenswrapper[4620]: E0129 15:32:37.624192 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf06851a81cd63d280b4d288b1754df8b0530b7abffa64f6a7f81e6b7fc42f1b\": container with ID starting with cf06851a81cd63d280b4d288b1754df8b0530b7abffa64f6a7f81e6b7fc42f1b not found: ID does not exist" containerID="cf06851a81cd63d280b4d288b1754df8b0530b7abffa64f6a7f81e6b7fc42f1b"
Jan 29 15:32:37 crc kubenswrapper[4620]: I0129 15:32:37.624230 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf06851a81cd63d280b4d288b1754df8b0530b7abffa64f6a7f81e6b7fc42f1b"} err="failed to get container status \"cf06851a81cd63d280b4d288b1754df8b0530b7abffa64f6a7f81e6b7fc42f1b\": rpc error: code = NotFound desc = could not find container \"cf06851a81cd63d280b4d288b1754df8b0530b7abffa64f6a7f81e6b7fc42f1b\": container with ID starting with cf06851a81cd63d280b4d288b1754df8b0530b7abffa64f6a7f81e6b7fc42f1b not found: ID does not exist"
Jan 29 15:32:38 crc kubenswrapper[4620]: I0129 15:32:38.890913 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ea02969-9297-4e18-9d76-89616a48cc87" path="/var/lib/kubelet/pods/3ea02969-9297-4e18-9d76-89616a48cc87/volumes"
403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:32:44 crc kubenswrapper[4620]: E0129 15:32:44.003874 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnsqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ksgkn_openshift-marketplace(c80155fc-d965-455f-9349-49a1c8eaec07): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:32:44 crc kubenswrapper[4620]: E0129 15:32:44.005223 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-ksgkn" podUID="c80155fc-d965-455f-9349-49a1c8eaec07" Jan 29 15:32:46 crc kubenswrapper[4620]: I0129 15:32:46.097309 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vg6bb"] Jan 29 15:32:46 crc kubenswrapper[4620]: E0129 15:32:46.098048 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea02969-9297-4e18-9d76-89616a48cc87" containerName="extract-utilities" Jan 29 15:32:46 crc kubenswrapper[4620]: I0129 15:32:46.098069 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea02969-9297-4e18-9d76-89616a48cc87" containerName="extract-utilities" Jan 29 15:32:46 crc kubenswrapper[4620]: E0129 15:32:46.098090 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea02969-9297-4e18-9d76-89616a48cc87" containerName="registry-server" Jan 29 15:32:46 crc kubenswrapper[4620]: I0129 15:32:46.098102 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea02969-9297-4e18-9d76-89616a48cc87" containerName="registry-server" Jan 29 15:32:46 crc kubenswrapper[4620]: E0129 15:32:46.098128 4620 
Jan 29 15:32:46 crc kubenswrapper[4620]: E0129 15:32:46.098128 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea02969-9297-4e18-9d76-89616a48cc87" containerName="extract-content"
Jan 29 15:32:46 crc kubenswrapper[4620]: I0129 15:32:46.098144 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea02969-9297-4e18-9d76-89616a48cc87" containerName="extract-content"
Jan 29 15:32:46 crc kubenswrapper[4620]: I0129 15:32:46.098399 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea02969-9297-4e18-9d76-89616a48cc87" containerName="registry-server"
Jan 29 15:32:46 crc kubenswrapper[4620]: I0129 15:32:46.101165 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vg6bb"
Jan 29 15:32:46 crc kubenswrapper[4620]: I0129 15:32:46.118619 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vg6bb"]
Jan 29 15:32:46 crc kubenswrapper[4620]: I0129 15:32:46.231946 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297df0d0-8334-4c6e-8b11-7eb77e360dc1-utilities\") pod \"certified-operators-vg6bb\" (UID: \"297df0d0-8334-4c6e-8b11-7eb77e360dc1\") " pod="openshift-marketplace/certified-operators-vg6bb"
Jan 29 15:32:46 crc kubenswrapper[4620]: I0129 15:32:46.232029 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m244d\" (UniqueName: \"kubernetes.io/projected/297df0d0-8334-4c6e-8b11-7eb77e360dc1-kube-api-access-m244d\") pod \"certified-operators-vg6bb\" (UID: \"297df0d0-8334-4c6e-8b11-7eb77e360dc1\") " pod="openshift-marketplace/certified-operators-vg6bb"
Jan 29 15:32:46 crc kubenswrapper[4620]: I0129 15:32:46.232061 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297df0d0-8334-4c6e-8b11-7eb77e360dc1-catalog-content\") pod \"certified-operators-vg6bb\" (UID: \"297df0d0-8334-4c6e-8b11-7eb77e360dc1\") " pod="openshift-marketplace/certified-operators-vg6bb"
Jan 29 15:32:46 crc kubenswrapper[4620]: I0129 15:32:46.333079 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m244d\" (UniqueName: \"kubernetes.io/projected/297df0d0-8334-4c6e-8b11-7eb77e360dc1-kube-api-access-m244d\") pod \"certified-operators-vg6bb\" (UID: \"297df0d0-8334-4c6e-8b11-7eb77e360dc1\") " pod="openshift-marketplace/certified-operators-vg6bb"
Jan 29 15:32:46 crc kubenswrapper[4620]: I0129 15:32:46.333144 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297df0d0-8334-4c6e-8b11-7eb77e360dc1-catalog-content\") pod \"certified-operators-vg6bb\" (UID: \"297df0d0-8334-4c6e-8b11-7eb77e360dc1\") " pod="openshift-marketplace/certified-operators-vg6bb"
Jan 29 15:32:46 crc kubenswrapper[4620]: I0129 15:32:46.333271 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297df0d0-8334-4c6e-8b11-7eb77e360dc1-utilities\") pod \"certified-operators-vg6bb\" (UID: \"297df0d0-8334-4c6e-8b11-7eb77e360dc1\") " pod="openshift-marketplace/certified-operators-vg6bb"
\"kubernetes.io/empty-dir/297df0d0-8334-4c6e-8b11-7eb77e360dc1-catalog-content\") pod \"certified-operators-vg6bb\" (UID: \"297df0d0-8334-4c6e-8b11-7eb77e360dc1\") " pod="openshift-marketplace/certified-operators-vg6bb" Jan 29 15:32:46 crc kubenswrapper[4620]: I0129 15:32:46.333801 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297df0d0-8334-4c6e-8b11-7eb77e360dc1-utilities\") pod \"certified-operators-vg6bb\" (UID: \"297df0d0-8334-4c6e-8b11-7eb77e360dc1\") " pod="openshift-marketplace/certified-operators-vg6bb" Jan 29 15:32:46 crc kubenswrapper[4620]: I0129 15:32:46.353459 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m244d\" (UniqueName: \"kubernetes.io/projected/297df0d0-8334-4c6e-8b11-7eb77e360dc1-kube-api-access-m244d\") pod \"certified-operators-vg6bb\" (UID: \"297df0d0-8334-4c6e-8b11-7eb77e360dc1\") " pod="openshift-marketplace/certified-operators-vg6bb" Jan 29 15:32:46 crc kubenswrapper[4620]: I0129 15:32:46.429043 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vg6bb" Jan 29 15:32:46 crc kubenswrapper[4620]: I0129 15:32:46.871406 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vg6bb"] Jan 29 15:32:46 crc kubenswrapper[4620]: W0129 15:32:46.874433 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod297df0d0_8334_4c6e_8b11_7eb77e360dc1.slice/crio-9d880ff92eac4b815e205a2e3db2d73f9a20bb1f6c2563a338a428410e2529f5 WatchSource:0}: Error finding container 9d880ff92eac4b815e205a2e3db2d73f9a20bb1f6c2563a338a428410e2529f5: Status 404 returned error can't find the container with id 9d880ff92eac4b815e205a2e3db2d73f9a20bb1f6c2563a338a428410e2529f5 Jan 29 15:32:47 crc kubenswrapper[4620]: I0129 15:32:47.650879 4620 generic.go:334] "Generic (PLEG): container finished" podID="297df0d0-8334-4c6e-8b11-7eb77e360dc1" containerID="56b6891dcbb1b37298b5a96448a92af05c8ccf359e8798f28b8e4f5ba9003822" exitCode=0 Jan 29 15:32:47 crc kubenswrapper[4620]: I0129 15:32:47.650948 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vg6bb" event={"ID":"297df0d0-8334-4c6e-8b11-7eb77e360dc1","Type":"ContainerDied","Data":"56b6891dcbb1b37298b5a96448a92af05c8ccf359e8798f28b8e4f5ba9003822"} Jan 29 15:32:47 crc kubenswrapper[4620]: I0129 15:32:47.650987 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vg6bb" event={"ID":"297df0d0-8334-4c6e-8b11-7eb77e360dc1","Type":"ContainerStarted","Data":"9d880ff92eac4b815e205a2e3db2d73f9a20bb1f6c2563a338a428410e2529f5"} Jan 29 15:32:47 crc kubenswrapper[4620]: E0129 15:32:47.790882 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:32:47 crc kubenswrapper[4620]: E0129 15:32:47.791113 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m244d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vg6bb_openshift-marketplace(297df0d0-8334-4c6e-8b11-7eb77e360dc1): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:32:47 crc kubenswrapper[4620]: E0129 15:32:47.792367 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-vg6bb" podUID="297df0d0-8334-4c6e-8b11-7eb77e360dc1" Jan 29 15:32:48 crc kubenswrapper[4620]: E0129 15:32:48.664031 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vg6bb" podUID="297df0d0-8334-4c6e-8b11-7eb77e360dc1" Jan 29 15:32:55 crc kubenswrapper[4620]: E0129 15:32:55.875025 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ksgkn" podUID="c80155fc-d965-455f-9349-49a1c8eaec07" Jan 29 15:33:02 crc kubenswrapper[4620]: E0129 15:33:02.012877 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:33:02 crc kubenswrapper[4620]: E0129 15:33:02.013703 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
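Note the cadence here: real pull attempts happen at 15:32:47, 15:33:02, 15:33:26, while the entries in between are pure ImagePullBackOff "Error syncing pod" messages. Kubelet only re-pulls once the back-off window expires, roughly doubling the delay after each failure. A toy model of that policy, assuming the commonly documented kubelet defaults of a 10s base delay doubling to a 5m cap (not kubelet's actual code):

// backoff.go — toy model of the image pull back-off cadence seen above.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute // assumed kubelet defaults
	elapsed := time.Duration(0)
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("pull attempt %d at +%v; on failure, back off %v\n", attempt, elapsed, delay)
		elapsed += delay
		if delay *= 2; delay > maxDelay {
			delay = maxDelay // the back-off is capped, so retries never stop entirely
		}
	}
}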
Jan 29 15:33:02 crc kubenswrapper[4620]: E0129 15:33:02.013703 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m244d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vg6bb_openshift-marketplace(297df0d0-8334-4c6e-8b11-7eb77e360dc1): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 15:33:02 crc kubenswrapper[4620]: E0129 15:33:02.014926 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-vg6bb" podUID="297df0d0-8334-4c6e-8b11-7eb77e360dc1"
Jan 29 15:33:08 crc kubenswrapper[4620]: E0129 15:33:08.051591 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 29 15:33:08 crc kubenswrapper[4620]: E0129 15:33:08.052385 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnsqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ksgkn_openshift-marketplace(c80155fc-d965-455f-9349-49a1c8eaec07): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 15:33:08 crc kubenswrapper[4620]: E0129 15:33:08.053670 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-ksgkn" podUID="c80155fc-d965-455f-9349-49a1c8eaec07"
Jan 29 15:33:12 crc kubenswrapper[4620]: E0129 15:33:12.878606 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vg6bb" podUID="297df0d0-8334-4c6e-8b11-7eb77e360dc1"
Jan 29 15:33:23 crc kubenswrapper[4620]: E0129 15:33:23.876808 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ksgkn" podUID="c80155fc-d965-455f-9349-49a1c8eaec07"
Jan 29 15:33:26 crc kubenswrapper[4620]: E0129 15:33:26.001838 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
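The &Container{...} dump that each "Unhandled Error" entry repeats is kubelet printing the init container spec verbatim. Rewritten as the corresponding typed Kubernetes API object, the fields that matter are easier to see (a sketch using the k8s.io/api types; values are copied from the dump above, defaulted/nil fields omitted):

// spec.go — the extract-content init container from the log, as a typed object.
package main

import (
	corev1 "k8s.io/api/core/v1"
)

func ptr[T any](v T) *T { return &v }

var extractContent = corev1.Container{
	Name:    "extract-content",
	Image:   "registry.redhat.io/redhat/certified-operator-index:v4.18",
	Command: []string{"/utilities/copy-content"},
	Args: []string{
		"--catalog.from=/configs", "--catalog.to=/extracted-catalog/catalog",
		"--cache.from=/tmp/cache", "--cache.to=/extracted-catalog/cache",
	},
	VolumeMounts: []corev1.VolumeMount{
		{Name: "utilities", MountPath: "/utilities"},
		{Name: "catalog-content", MountPath: "/extracted-catalog"},
	},
	// ImagePullPolicy Always is why every pod sync goes back to the registry.
	ImagePullPolicy:          corev1.PullAlways,
	TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	SecurityContext: &corev1.SecurityContext{
		Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
		RunAsUser:                ptr(int64(1000170000)),
		RunAsNonRoot:             ptr(true),
		AllowPrivilegeEscalation: ptr(false),
	},
}

func main() { _ = extractContent }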
Jan 29 15:33:26 crc kubenswrapper[4620]: E0129 15:33:26.002101 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m244d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vg6bb_openshift-marketplace(297df0d0-8334-4c6e-8b11-7eb77e360dc1): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 15:33:26 crc kubenswrapper[4620]: E0129 15:33:26.003705 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-vg6bb" podUID="297df0d0-8334-4c6e-8b11-7eb77e360dc1"
Jan 29 15:33:37 crc kubenswrapper[4620]: E0129 15:33:37.876132 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ksgkn" podUID="c80155fc-d965-455f-9349-49a1c8eaec07"
Jan 29 15:33:37 crc kubenswrapper[4620]: E0129 15:33:37.878160 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vg6bb" podUID="297df0d0-8334-4c6e-8b11-7eb77e360dc1"
Jan 29 15:33:48 crc kubenswrapper[4620]: E0129 15:33:48.876857 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vg6bb" podUID="297df0d0-8334-4c6e-8b11-7eb77e360dc1"
Jan 29 15:33:53 crc kubenswrapper[4620]: I0129 15:33:53.238670 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksgkn" event={"ID":"c80155fc-d965-455f-9349-49a1c8eaec07","Type":"ContainerStarted","Data":"72b0db09be315b68a42c1acd56b9ac225d766f887fb6f1e9f4c11e7f20bfa116"}
Jan 29 15:33:54 crc kubenswrapper[4620]: I0129 15:33:54.247197 4620 generic.go:334] "Generic (PLEG): container finished" podID="c80155fc-d965-455f-9349-49a1c8eaec07" containerID="72b0db09be315b68a42c1acd56b9ac225d766f887fb6f1e9f4c11e7f20bfa116" exitCode=0
Jan 29 15:33:54 crc kubenswrapper[4620]: I0129 15:33:54.247264 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksgkn" event={"ID":"c80155fc-d965-455f-9349-49a1c8eaec07","Type":"ContainerDied","Data":"72b0db09be315b68a42c1acd56b9ac225d766f887fb6f1e9f4c11e7f20bfa116"}
Jan 29 15:33:55 crc kubenswrapper[4620]: I0129 15:33:55.256232 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksgkn" event={"ID":"c80155fc-d965-455f-9349-49a1c8eaec07","Type":"ContainerStarted","Data":"878924a26e5c25aac9c683b6a703411d9c979770faf3a0cb95610b6bf0934300"}
Jan 29 15:33:55 crc kubenswrapper[4620]: I0129 15:33:55.277324 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ksgkn" podStartSLOduration=2.985502533 podStartE2EDuration="1m26.277304676s" podCreationTimestamp="2026-01-29 15:32:29 +0000 UTC" firstStartedPulling="2026-01-29 15:32:31.509337812 +0000 UTC m=+1892.122165497" lastFinishedPulling="2026-01-29 15:33:54.801139985 +0000 UTC m=+1975.413967640" observedRunningTime="2026-01-29 15:33:55.273412604 +0000 UTC m=+1975.886240269" watchObservedRunningTime="2026-01-29 15:33:55.277304676 +0000 UTC m=+1975.890132321"
Jan 29 15:34:00 crc kubenswrapper[4620]: I0129 15:34:00.357224 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ksgkn"
Jan 29 15:34:00 crc kubenswrapper[4620]: I0129 15:34:00.357591 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ksgkn"
Jan 29 15:34:00 crc kubenswrapper[4620]: I0129 15:34:00.409200 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ksgkn"
Jan 29 15:34:00 crc kubenswrapper[4620]: E0129 15:34:00.880968 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vg6bb" podUID="297df0d0-8334-4c6e-8b11-7eb77e360dc1"
Jan 29 15:34:01 crc kubenswrapper[4620]: I0129 15:34:01.341421 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ksgkn"
Jan 29 15:34:01 crc kubenswrapper[4620]: I0129 15:34:01.383462 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksgkn"]
Jan 29 15:34:03 crc kubenswrapper[4620]: I0129 15:34:03.315106 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ksgkn" podUID="c80155fc-d965-455f-9349-49a1c8eaec07" containerName="registry-server" containerID="cri-o://878924a26e5c25aac9c683b6a703411d9c979770faf3a0cb95610b6bf0934300" gracePeriod=2
Jan 29 15:34:03 crc kubenswrapper[4620]: I0129 15:34:03.710991 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ksgkn"
Jan 29 15:34:03 crc kubenswrapper[4620]: I0129 15:34:03.765101 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c80155fc-d965-455f-9349-49a1c8eaec07-catalog-content\") pod \"c80155fc-d965-455f-9349-49a1c8eaec07\" (UID: \"c80155fc-d965-455f-9349-49a1c8eaec07\") "
Jan 29 15:34:03 crc kubenswrapper[4620]: I0129 15:34:03.765165 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c80155fc-d965-455f-9349-49a1c8eaec07-utilities\") pod \"c80155fc-d965-455f-9349-49a1c8eaec07\" (UID: \"c80155fc-d965-455f-9349-49a1c8eaec07\") "
Jan 29 15:34:03 crc kubenswrapper[4620]: I0129 15:34:03.765208 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnsqv\" (UniqueName: \"kubernetes.io/projected/c80155fc-d965-455f-9349-49a1c8eaec07-kube-api-access-pnsqv\") pod \"c80155fc-d965-455f-9349-49a1c8eaec07\" (UID: \"c80155fc-d965-455f-9349-49a1c8eaec07\") "
Jan 29 15:34:03 crc kubenswrapper[4620]: I0129 15:34:03.765990 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c80155fc-d965-455f-9349-49a1c8eaec07-utilities" (OuterVolumeSpecName: "utilities") pod "c80155fc-d965-455f-9349-49a1c8eaec07" (UID: "c80155fc-d965-455f-9349-49a1c8eaec07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:34:03 crc kubenswrapper[4620]: I0129 15:34:03.770989 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c80155fc-d965-455f-9349-49a1c8eaec07-kube-api-access-pnsqv" (OuterVolumeSpecName: "kube-api-access-pnsqv") pod "c80155fc-d965-455f-9349-49a1c8eaec07" (UID: "c80155fc-d965-455f-9349-49a1c8eaec07"). InnerVolumeSpecName "kube-api-access-pnsqv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:34:03 crc kubenswrapper[4620]: I0129 15:34:03.792396 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c80155fc-d965-455f-9349-49a1c8eaec07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c80155fc-d965-455f-9349-49a1c8eaec07" (UID: "c80155fc-d965-455f-9349-49a1c8eaec07"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:34:03 crc kubenswrapper[4620]: I0129 15:34:03.866452 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c80155fc-d965-455f-9349-49a1c8eaec07-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 15:34:03 crc kubenswrapper[4620]: I0129 15:34:03.866497 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnsqv\" (UniqueName: \"kubernetes.io/projected/c80155fc-d965-455f-9349-49a1c8eaec07-kube-api-access-pnsqv\") on node \"crc\" DevicePath \"\""
Jan 29 15:34:03 crc kubenswrapper[4620]: I0129 15:34:03.866514 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c80155fc-d965-455f-9349-49a1c8eaec07-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 15:34:04 crc kubenswrapper[4620]: I0129 15:34:04.326400 4620 generic.go:334] "Generic (PLEG): container finished" podID="c80155fc-d965-455f-9349-49a1c8eaec07" containerID="878924a26e5c25aac9c683b6a703411d9c979770faf3a0cb95610b6bf0934300" exitCode=0
Jan 29 15:34:04 crc kubenswrapper[4620]: I0129 15:34:04.326500 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ksgkn"
Jan 29 15:34:04 crc kubenswrapper[4620]: I0129 15:34:04.326546 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksgkn" event={"ID":"c80155fc-d965-455f-9349-49a1c8eaec07","Type":"ContainerDied","Data":"878924a26e5c25aac9c683b6a703411d9c979770faf3a0cb95610b6bf0934300"}
Jan 29 15:34:04 crc kubenswrapper[4620]: I0129 15:34:04.327083 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ksgkn" event={"ID":"c80155fc-d965-455f-9349-49a1c8eaec07","Type":"ContainerDied","Data":"9b7d5dfc6df54d447b6309b1e1497c7a112435566f62a08699d6bc39d5f27502"}
Jan 29 15:34:04 crc kubenswrapper[4620]: I0129 15:34:04.327128 4620 scope.go:117] "RemoveContainer" containerID="878924a26e5c25aac9c683b6a703411d9c979770faf3a0cb95610b6bf0934300"
Jan 29 15:34:04 crc kubenswrapper[4620]: I0129 15:34:04.369180 4620 scope.go:117] "RemoveContainer" containerID="72b0db09be315b68a42c1acd56b9ac225d766f887fb6f1e9f4c11e7f20bfa116"
Jan 29 15:34:04 crc kubenswrapper[4620]: I0129 15:34:04.389564 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksgkn"]
Jan 29 15:34:04 crc kubenswrapper[4620]: I0129 15:34:04.393351 4620 scope.go:117] "RemoveContainer" containerID="63a511c257c764b37c1183196cfce33849fbb1492ab80376ede1de58c3c7dfec"
Jan 29 15:34:04 crc kubenswrapper[4620]: I0129 15:34:04.401068 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ksgkn"]
Jan 29 15:34:04 crc kubenswrapper[4620]: I0129 15:34:04.424578 4620 scope.go:117] "RemoveContainer" containerID="878924a26e5c25aac9c683b6a703411d9c979770faf3a0cb95610b6bf0934300"
Jan 29 15:34:04 crc kubenswrapper[4620]: E0129 15:34:04.425143 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"878924a26e5c25aac9c683b6a703411d9c979770faf3a0cb95610b6bf0934300\": container with ID starting with 878924a26e5c25aac9c683b6a703411d9c979770faf3a0cb95610b6bf0934300 not found: ID does not exist" containerID="878924a26e5c25aac9c683b6a703411d9c979770faf3a0cb95610b6bf0934300"
Jan 29 15:34:04 crc kubenswrapper[4620]: I0129 15:34:04.425296 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"878924a26e5c25aac9c683b6a703411d9c979770faf3a0cb95610b6bf0934300"} err="failed to get container status \"878924a26e5c25aac9c683b6a703411d9c979770faf3a0cb95610b6bf0934300\": rpc error: code = NotFound desc = could not find container \"878924a26e5c25aac9c683b6a703411d9c979770faf3a0cb95610b6bf0934300\": container with ID starting with 878924a26e5c25aac9c683b6a703411d9c979770faf3a0cb95610b6bf0934300 not found: ID does not exist"
Jan 29 15:34:04 crc kubenswrapper[4620]: I0129 15:34:04.425420 4620 scope.go:117] "RemoveContainer" containerID="72b0db09be315b68a42c1acd56b9ac225d766f887fb6f1e9f4c11e7f20bfa116"
Jan 29 15:34:04 crc kubenswrapper[4620]: E0129 15:34:04.426193 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72b0db09be315b68a42c1acd56b9ac225d766f887fb6f1e9f4c11e7f20bfa116\": container with ID starting with 72b0db09be315b68a42c1acd56b9ac225d766f887fb6f1e9f4c11e7f20bfa116 not found: ID does not exist" containerID="72b0db09be315b68a42c1acd56b9ac225d766f887fb6f1e9f4c11e7f20bfa116"
Jan 29 15:34:04 crc kubenswrapper[4620]: I0129 15:34:04.426230 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72b0db09be315b68a42c1acd56b9ac225d766f887fb6f1e9f4c11e7f20bfa116"} err="failed to get container status \"72b0db09be315b68a42c1acd56b9ac225d766f887fb6f1e9f4c11e7f20bfa116\": rpc error: code = NotFound desc = could not find container \"72b0db09be315b68a42c1acd56b9ac225d766f887fb6f1e9f4c11e7f20bfa116\": container with ID starting with 72b0db09be315b68a42c1acd56b9ac225d766f887fb6f1e9f4c11e7f20bfa116 not found: ID does not exist"
Jan 29 15:34:04 crc kubenswrapper[4620]: I0129 15:34:04.426260 4620 scope.go:117] "RemoveContainer" containerID="63a511c257c764b37c1183196cfce33849fbb1492ab80376ede1de58c3c7dfec"
Jan 29 15:34:04 crc kubenswrapper[4620]: E0129 15:34:04.426704 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63a511c257c764b37c1183196cfce33849fbb1492ab80376ede1de58c3c7dfec\": container with ID starting with 63a511c257c764b37c1183196cfce33849fbb1492ab80376ede1de58c3c7dfec not found: ID does not exist" containerID="63a511c257c764b37c1183196cfce33849fbb1492ab80376ede1de58c3c7dfec"
Jan 29 15:34:04 crc kubenswrapper[4620]: I0129 15:34:04.426834 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63a511c257c764b37c1183196cfce33849fbb1492ab80376ede1de58c3c7dfec"} err="failed to get container status \"63a511c257c764b37c1183196cfce33849fbb1492ab80376ede1de58c3c7dfec\": rpc error: code = NotFound desc = could not find container \"63a511c257c764b37c1183196cfce33849fbb1492ab80376ede1de58c3c7dfec\": container with ID starting with 63a511c257c764b37c1183196cfce33849fbb1492ab80376ede1de58c3c7dfec not found: ID does not exist"
Jan 29 15:34:04 crc kubenswrapper[4620]: I0129 15:34:04.888256 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c80155fc-d965-455f-9349-49a1c8eaec07" path="/var/lib/kubelet/pods/c80155fc-d965-455f-9349-49a1c8eaec07/volumes"
Jan 29 15:34:14 crc kubenswrapper[4620]: I0129 15:34:14.421221 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vg6bb" event={"ID":"297df0d0-8334-4c6e-8b11-7eb77e360dc1","Type":"ContainerStarted","Data":"12a4749b9305136b456f97cc72880d20725e137cae5832124a193bec8175c2b8"}
Jan 29 15:34:16 crc kubenswrapper[4620]: I0129 15:34:16.437611 4620 generic.go:334] "Generic (PLEG): container finished" podID="297df0d0-8334-4c6e-8b11-7eb77e360dc1" containerID="12a4749b9305136b456f97cc72880d20725e137cae5832124a193bec8175c2b8" exitCode=0
Jan 29 15:34:16 crc kubenswrapper[4620]: I0129 15:34:16.437664 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vg6bb" event={"ID":"297df0d0-8334-4c6e-8b11-7eb77e360dc1","Type":"ContainerDied","Data":"12a4749b9305136b456f97cc72880d20725e137cae5832124a193bec8175c2b8"}
Jan 29 15:34:17 crc kubenswrapper[4620]: I0129 15:34:17.471309 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vg6bb" event={"ID":"297df0d0-8334-4c6e-8b11-7eb77e360dc1","Type":"ContainerStarted","Data":"dfc6951cafae287bddec4281bb4ccba79dea1c40af3af92c307b82ceb9b7d85d"}
Jan 29 15:34:17 crc kubenswrapper[4620]: I0129 15:34:17.493883 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vg6bb" podStartSLOduration=2.314395615 podStartE2EDuration="1m31.493861541s" podCreationTimestamp="2026-01-29 15:32:46 +0000 UTC" firstStartedPulling="2026-01-29 15:32:47.653914336 +0000 UTC m=+1908.266742011" lastFinishedPulling="2026-01-29 15:34:16.833380292 +0000 UTC m=+1997.446207937" observedRunningTime="2026-01-29 15:34:17.489873587 +0000 UTC m=+1998.102701242" watchObservedRunningTime="2026-01-29 15:34:17.493861541 +0000 UTC m=+1998.106689196"
Jan 29 15:34:26 crc kubenswrapper[4620]: I0129 15:34:26.429529 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vg6bb"
Jan 29 15:34:26 crc kubenswrapper[4620]: I0129 15:34:26.431255 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vg6bb"
Jan 29 15:34:26 crc kubenswrapper[4620]: I0129 15:34:26.493587 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vg6bb"
Jan 29 15:34:26 crc kubenswrapper[4620]: I0129 15:34:26.594537 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vg6bb"
Jan 29 15:34:26 crc kubenswrapper[4620]: I0129 15:34:26.742076 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vg6bb"]
Jan 29 15:34:28 crc kubenswrapper[4620]: I0129 15:34:28.547910 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vg6bb" podUID="297df0d0-8334-4c6e-8b11-7eb77e360dc1" containerName="registry-server" containerID="cri-o://dfc6951cafae287bddec4281bb4ccba79dea1c40af3af92c307b82ceb9b7d85d" gracePeriod=2
Jan 29 15:34:28 crc kubenswrapper[4620]: I0129 15:34:28.927592 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vg6bb"
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.051555 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297df0d0-8334-4c6e-8b11-7eb77e360dc1-catalog-content\") pod \"297df0d0-8334-4c6e-8b11-7eb77e360dc1\" (UID: \"297df0d0-8334-4c6e-8b11-7eb77e360dc1\") "
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.051631 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297df0d0-8334-4c6e-8b11-7eb77e360dc1-utilities\") pod \"297df0d0-8334-4c6e-8b11-7eb77e360dc1\" (UID: \"297df0d0-8334-4c6e-8b11-7eb77e360dc1\") "
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.051720 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m244d\" (UniqueName: \"kubernetes.io/projected/297df0d0-8334-4c6e-8b11-7eb77e360dc1-kube-api-access-m244d\") pod \"297df0d0-8334-4c6e-8b11-7eb77e360dc1\" (UID: \"297df0d0-8334-4c6e-8b11-7eb77e360dc1\") "
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.052406 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/297df0d0-8334-4c6e-8b11-7eb77e360dc1-utilities" (OuterVolumeSpecName: "utilities") pod "297df0d0-8334-4c6e-8b11-7eb77e360dc1" (UID: "297df0d0-8334-4c6e-8b11-7eb77e360dc1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.057340 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/297df0d0-8334-4c6e-8b11-7eb77e360dc1-kube-api-access-m244d" (OuterVolumeSpecName: "kube-api-access-m244d") pod "297df0d0-8334-4c6e-8b11-7eb77e360dc1" (UID: "297df0d0-8334-4c6e-8b11-7eb77e360dc1"). InnerVolumeSpecName "kube-api-access-m244d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.104729 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/297df0d0-8334-4c6e-8b11-7eb77e360dc1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "297df0d0-8334-4c6e-8b11-7eb77e360dc1" (UID: "297df0d0-8334-4c6e-8b11-7eb77e360dc1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.153066 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m244d\" (UniqueName: \"kubernetes.io/projected/297df0d0-8334-4c6e-8b11-7eb77e360dc1-kube-api-access-m244d\") on node \"crc\" DevicePath \"\""
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.153113 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297df0d0-8334-4c6e-8b11-7eb77e360dc1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.153128 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297df0d0-8334-4c6e-8b11-7eb77e360dc1-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.555481 4620 generic.go:334] "Generic (PLEG): container finished" podID="297df0d0-8334-4c6e-8b11-7eb77e360dc1" containerID="dfc6951cafae287bddec4281bb4ccba79dea1c40af3af92c307b82ceb9b7d85d" exitCode=0
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.555554 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vg6bb"
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.555570 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vg6bb" event={"ID":"297df0d0-8334-4c6e-8b11-7eb77e360dc1","Type":"ContainerDied","Data":"dfc6951cafae287bddec4281bb4ccba79dea1c40af3af92c307b82ceb9b7d85d"}
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.556156 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vg6bb" event={"ID":"297df0d0-8334-4c6e-8b11-7eb77e360dc1","Type":"ContainerDied","Data":"9d880ff92eac4b815e205a2e3db2d73f9a20bb1f6c2563a338a428410e2529f5"}
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.556175 4620 scope.go:117] "RemoveContainer" containerID="dfc6951cafae287bddec4281bb4ccba79dea1c40af3af92c307b82ceb9b7d85d"
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.584036 4620 scope.go:117] "RemoveContainer" containerID="12a4749b9305136b456f97cc72880d20725e137cae5832124a193bec8175c2b8"
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.593678 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vg6bb"]
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.599845 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vg6bb"]
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.614577 4620 scope.go:117] "RemoveContainer" containerID="56b6891dcbb1b37298b5a96448a92af05c8ccf359e8798f28b8e4f5ba9003822"
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.639884 4620 scope.go:117] "RemoveContainer" containerID="dfc6951cafae287bddec4281bb4ccba79dea1c40af3af92c307b82ceb9b7d85d"
Jan 29 15:34:29 crc kubenswrapper[4620]: E0129 15:34:29.640376 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfc6951cafae287bddec4281bb4ccba79dea1c40af3af92c307b82ceb9b7d85d\": container with ID starting with dfc6951cafae287bddec4281bb4ccba79dea1c40af3af92c307b82ceb9b7d85d not found: ID does not exist" containerID="dfc6951cafae287bddec4281bb4ccba79dea1c40af3af92c307b82ceb9b7d85d"
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.640443 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfc6951cafae287bddec4281bb4ccba79dea1c40af3af92c307b82ceb9b7d85d"} err="failed to get container status \"dfc6951cafae287bddec4281bb4ccba79dea1c40af3af92c307b82ceb9b7d85d\": rpc error: code = NotFound desc = could not find container \"dfc6951cafae287bddec4281bb4ccba79dea1c40af3af92c307b82ceb9b7d85d\": container with ID starting with dfc6951cafae287bddec4281bb4ccba79dea1c40af3af92c307b82ceb9b7d85d not found: ID does not exist"
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.640473 4620 scope.go:117] "RemoveContainer" containerID="12a4749b9305136b456f97cc72880d20725e137cae5832124a193bec8175c2b8"
Jan 29 15:34:29 crc kubenswrapper[4620]: E0129 15:34:29.640894 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12a4749b9305136b456f97cc72880d20725e137cae5832124a193bec8175c2b8\": container with ID starting with 12a4749b9305136b456f97cc72880d20725e137cae5832124a193bec8175c2b8 not found: ID does not exist" containerID="12a4749b9305136b456f97cc72880d20725e137cae5832124a193bec8175c2b8"
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.640924 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12a4749b9305136b456f97cc72880d20725e137cae5832124a193bec8175c2b8"} err="failed to get container status \"12a4749b9305136b456f97cc72880d20725e137cae5832124a193bec8175c2b8\": rpc error: code = NotFound desc = could not find container \"12a4749b9305136b456f97cc72880d20725e137cae5832124a193bec8175c2b8\": container with ID starting with 12a4749b9305136b456f97cc72880d20725e137cae5832124a193bec8175c2b8 not found: ID does not exist"
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.640944 4620 scope.go:117] "RemoveContainer" containerID="56b6891dcbb1b37298b5a96448a92af05c8ccf359e8798f28b8e4f5ba9003822"
Jan 29 15:34:29 crc kubenswrapper[4620]: E0129 15:34:29.641302 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56b6891dcbb1b37298b5a96448a92af05c8ccf359e8798f28b8e4f5ba9003822\": container with ID starting with 56b6891dcbb1b37298b5a96448a92af05c8ccf359e8798f28b8e4f5ba9003822 not found: ID does not exist" containerID="56b6891dcbb1b37298b5a96448a92af05c8ccf359e8798f28b8e4f5ba9003822"
Jan 29 15:34:29 crc kubenswrapper[4620]: I0129 15:34:29.641331 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56b6891dcbb1b37298b5a96448a92af05c8ccf359e8798f28b8e4f5ba9003822"} err="failed to get container status \"56b6891dcbb1b37298b5a96448a92af05c8ccf359e8798f28b8e4f5ba9003822\": rpc error: code = NotFound desc = could not find container \"56b6891dcbb1b37298b5a96448a92af05c8ccf359e8798f28b8e4f5ba9003822\": container with ID starting with 56b6891dcbb1b37298b5a96448a92af05c8ccf359e8798f28b8e4f5ba9003822 not found: ID does not exist"
Jan 29 15:34:30 crc kubenswrapper[4620]: I0129 15:34:30.893361 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="297df0d0-8334-4c6e-8b11-7eb77e360dc1" path="/var/lib/kubelet/pods/297df0d0-8334-4c6e-8b11-7eb77e360dc1/volumes"
Jan 29 15:34:34 crc kubenswrapper[4620]: I0129 15:34:34.111038 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 15:34:34 crc kubenswrapper[4620]: I0129 15:34:34.111401 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 15:35:04 crc kubenswrapper[4620]: I0129 15:35:04.111184 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 15:35:04 crc kubenswrapper[4620]: I0129 15:35:04.112005 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 15:35:16 crc kubenswrapper[4620]: I0129 15:35:16.891935 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fdf62"]
Jan 29 15:35:16 crc kubenswrapper[4620]: E0129 15:35:16.892494 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c80155fc-d965-455f-9349-49a1c8eaec07" containerName="extract-content"
Jan 29 15:35:16 crc kubenswrapper[4620]: I0129 15:35:16.892506 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c80155fc-d965-455f-9349-49a1c8eaec07" containerName="extract-content"
Jan 29 15:35:16 crc kubenswrapper[4620]: E0129 15:35:16.892530 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="297df0d0-8334-4c6e-8b11-7eb77e360dc1" containerName="extract-utilities"
Jan 29 15:35:16 crc kubenswrapper[4620]: I0129 15:35:16.892536 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="297df0d0-8334-4c6e-8b11-7eb77e360dc1" containerName="extract-utilities"
Jan 29 15:35:16 crc kubenswrapper[4620]: E0129 15:35:16.892547 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c80155fc-d965-455f-9349-49a1c8eaec07" containerName="registry-server"
Jan 29 15:35:16 crc kubenswrapper[4620]: I0129 15:35:16.892552 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c80155fc-d965-455f-9349-49a1c8eaec07" containerName="registry-server"
Jan 29 15:35:16 crc kubenswrapper[4620]: E0129 15:35:16.892561 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="297df0d0-8334-4c6e-8b11-7eb77e360dc1" containerName="extract-content"
Jan 29 15:35:16 crc kubenswrapper[4620]: I0129 15:35:16.892567 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="297df0d0-8334-4c6e-8b11-7eb77e360dc1" containerName="extract-content"
Jan 29 15:35:16 crc kubenswrapper[4620]: E0129 15:35:16.892576 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c80155fc-d965-455f-9349-49a1c8eaec07" containerName="extract-utilities"
Jan 29 15:35:16 crc kubenswrapper[4620]: I0129 15:35:16.892582 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="c80155fc-d965-455f-9349-49a1c8eaec07" containerName="extract-utilities"
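The machine-config-daemon liveness failures above are plain TCP connection refusals on 127.0.0.1:8798/health: the prober counts any dial or transport error as a failed probe, and once consecutive failures exceed the probe's failure threshold kubelet kills and restarts the container (which happens at 15:35:34 below). A minimal prober sketch in the same spirit (stdlib only; the thresholds here are illustrative, kubelet's come from the probe spec):

// prober.go — HTTP liveness check: dial errors and non-2xx/3xx codes are failures.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	failures := 0
	for range [5]struct{}{} {
		if err := probe("http://127.0.0.1:8798/health"); err != nil {
			failures++
			fmt.Println("probe failed:", err)
		} else {
			failures = 0 // any success resets the counter
		}
		if failures >= 3 { // illustrative failureThreshold
			fmt.Println("liveness threshold exceeded: container would be restarted")
			return
		}
		time.Sleep(time.Second) // illustrative periodSeconds
	}
}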
containerName="registry-server" Jan 29 15:35:16 crc kubenswrapper[4620]: I0129 15:35:16.892596 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="297df0d0-8334-4c6e-8b11-7eb77e360dc1" containerName="registry-server" Jan 29 15:35:16 crc kubenswrapper[4620]: I0129 15:35:16.892715 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="297df0d0-8334-4c6e-8b11-7eb77e360dc1" containerName="registry-server" Jan 29 15:35:16 crc kubenswrapper[4620]: I0129 15:35:16.892735 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="c80155fc-d965-455f-9349-49a1c8eaec07" containerName="registry-server" Jan 29 15:35:16 crc kubenswrapper[4620]: I0129 15:35:16.893681 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fdf62" Jan 29 15:35:16 crc kubenswrapper[4620]: I0129 15:35:16.908631 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fdf62"] Jan 29 15:35:17 crc kubenswrapper[4620]: I0129 15:35:17.027711 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc715d18-f484-4783-b6f8-4f85f14dfa08-catalog-content\") pod \"redhat-operators-fdf62\" (UID: \"bc715d18-f484-4783-b6f8-4f85f14dfa08\") " pod="openshift-marketplace/redhat-operators-fdf62" Jan 29 15:35:17 crc kubenswrapper[4620]: I0129 15:35:17.027882 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8nlg\" (UniqueName: \"kubernetes.io/projected/bc715d18-f484-4783-b6f8-4f85f14dfa08-kube-api-access-q8nlg\") pod \"redhat-operators-fdf62\" (UID: \"bc715d18-f484-4783-b6f8-4f85f14dfa08\") " pod="openshift-marketplace/redhat-operators-fdf62" Jan 29 15:35:17 crc kubenswrapper[4620]: I0129 15:35:17.027914 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc715d18-f484-4783-b6f8-4f85f14dfa08-utilities\") pod \"redhat-operators-fdf62\" (UID: \"bc715d18-f484-4783-b6f8-4f85f14dfa08\") " pod="openshift-marketplace/redhat-operators-fdf62" Jan 29 15:35:17 crc kubenswrapper[4620]: I0129 15:35:17.129254 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc715d18-f484-4783-b6f8-4f85f14dfa08-utilities\") pod \"redhat-operators-fdf62\" (UID: \"bc715d18-f484-4783-b6f8-4f85f14dfa08\") " pod="openshift-marketplace/redhat-operators-fdf62" Jan 29 15:35:17 crc kubenswrapper[4620]: I0129 15:35:17.129331 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc715d18-f484-4783-b6f8-4f85f14dfa08-catalog-content\") pod \"redhat-operators-fdf62\" (UID: \"bc715d18-f484-4783-b6f8-4f85f14dfa08\") " pod="openshift-marketplace/redhat-operators-fdf62" Jan 29 15:35:17 crc kubenswrapper[4620]: I0129 15:35:17.129399 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8nlg\" (UniqueName: \"kubernetes.io/projected/bc715d18-f484-4783-b6f8-4f85f14dfa08-kube-api-access-q8nlg\") pod \"redhat-operators-fdf62\" (UID: \"bc715d18-f484-4783-b6f8-4f85f14dfa08\") " pod="openshift-marketplace/redhat-operators-fdf62" Jan 29 15:35:17 crc kubenswrapper[4620]: I0129 15:35:17.130178 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/bc715d18-f484-4783-b6f8-4f85f14dfa08-utilities\") pod \"redhat-operators-fdf62\" (UID: \"bc715d18-f484-4783-b6f8-4f85f14dfa08\") " pod="openshift-marketplace/redhat-operators-fdf62" Jan 29 15:35:17 crc kubenswrapper[4620]: I0129 15:35:17.130395 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc715d18-f484-4783-b6f8-4f85f14dfa08-catalog-content\") pod \"redhat-operators-fdf62\" (UID: \"bc715d18-f484-4783-b6f8-4f85f14dfa08\") " pod="openshift-marketplace/redhat-operators-fdf62" Jan 29 15:35:17 crc kubenswrapper[4620]: I0129 15:35:17.149041 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8nlg\" (UniqueName: \"kubernetes.io/projected/bc715d18-f484-4783-b6f8-4f85f14dfa08-kube-api-access-q8nlg\") pod \"redhat-operators-fdf62\" (UID: \"bc715d18-f484-4783-b6f8-4f85f14dfa08\") " pod="openshift-marketplace/redhat-operators-fdf62" Jan 29 15:35:17 crc kubenswrapper[4620]: I0129 15:35:17.217546 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fdf62" Jan 29 15:35:17 crc kubenswrapper[4620]: I0129 15:35:17.642737 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fdf62"] Jan 29 15:35:17 crc kubenswrapper[4620]: W0129 15:35:17.650087 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc715d18_f484_4783_b6f8_4f85f14dfa08.slice/crio-e63da18a3f7ad5a8fc3b089e4777684c25190f9f6f63a9948fa473f58edbdf5c WatchSource:0}: Error finding container e63da18a3f7ad5a8fc3b089e4777684c25190f9f6f63a9948fa473f58edbdf5c: Status 404 returned error can't find the container with id e63da18a3f7ad5a8fc3b089e4777684c25190f9f6f63a9948fa473f58edbdf5c Jan 29 15:35:17 crc kubenswrapper[4620]: I0129 15:35:17.947852 4620 generic.go:334] "Generic (PLEG): container finished" podID="bc715d18-f484-4783-b6f8-4f85f14dfa08" containerID="c4299b7f04503853c72f62cc5abb218a45c667e4363a5ef83a6ebffcefc4f356" exitCode=0 Jan 29 15:35:17 crc kubenswrapper[4620]: I0129 15:35:17.947891 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fdf62" event={"ID":"bc715d18-f484-4783-b6f8-4f85f14dfa08","Type":"ContainerDied","Data":"c4299b7f04503853c72f62cc5abb218a45c667e4363a5ef83a6ebffcefc4f356"} Jan 29 15:35:17 crc kubenswrapper[4620]: I0129 15:35:17.947914 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fdf62" event={"ID":"bc715d18-f484-4783-b6f8-4f85f14dfa08","Type":"ContainerStarted","Data":"e63da18a3f7ad5a8fc3b089e4777684c25190f9f6f63a9948fa473f58edbdf5c"} Jan 29 15:35:18 crc kubenswrapper[4620]: I0129 15:35:18.956559 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fdf62" event={"ID":"bc715d18-f484-4783-b6f8-4f85f14dfa08","Type":"ContainerStarted","Data":"d2a69b696976ae7161fbf099bbce54cdc098b8aac1d961ea5308fdc4ed87977f"} Jan 29 15:35:22 crc kubenswrapper[4620]: I0129 15:35:22.980913 4620 generic.go:334] "Generic (PLEG): container finished" podID="bc715d18-f484-4783-b6f8-4f85f14dfa08" containerID="d2a69b696976ae7161fbf099bbce54cdc098b8aac1d961ea5308fdc4ed87977f" exitCode=0 Jan 29 15:35:22 crc kubenswrapper[4620]: I0129 15:35:22.981389 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fdf62" 
event={"ID":"bc715d18-f484-4783-b6f8-4f85f14dfa08","Type":"ContainerDied","Data":"d2a69b696976ae7161fbf099bbce54cdc098b8aac1d961ea5308fdc4ed87977f"} Jan 29 15:35:24 crc kubenswrapper[4620]: I0129 15:35:24.998867 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fdf62" event={"ID":"bc715d18-f484-4783-b6f8-4f85f14dfa08","Type":"ContainerStarted","Data":"6aaab9b0a5e95afec0084f26e0db5657854061ccd5d5dfa959d366a2a3c1cd4e"} Jan 29 15:35:25 crc kubenswrapper[4620]: I0129 15:35:25.022352 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fdf62" podStartSLOduration=2.692475259 podStartE2EDuration="9.022335459s" podCreationTimestamp="2026-01-29 15:35:16 +0000 UTC" firstStartedPulling="2026-01-29 15:35:17.950428704 +0000 UTC m=+2058.563256349" lastFinishedPulling="2026-01-29 15:35:24.280288854 +0000 UTC m=+2064.893116549" observedRunningTime="2026-01-29 15:35:25.020261424 +0000 UTC m=+2065.633089099" watchObservedRunningTime="2026-01-29 15:35:25.022335459 +0000 UTC m=+2065.635163124" Jan 29 15:35:27 crc kubenswrapper[4620]: I0129 15:35:27.219102 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fdf62" Jan 29 15:35:27 crc kubenswrapper[4620]: I0129 15:35:27.219165 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fdf62" Jan 29 15:35:28 crc kubenswrapper[4620]: I0129 15:35:28.265511 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fdf62" podUID="bc715d18-f484-4783-b6f8-4f85f14dfa08" containerName="registry-server" probeResult="failure" output=< Jan 29 15:35:28 crc kubenswrapper[4620]: timeout: failed to connect service ":50051" within 1s Jan 29 15:35:28 crc kubenswrapper[4620]: > Jan 29 15:35:34 crc kubenswrapper[4620]: I0129 15:35:34.111188 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:35:34 crc kubenswrapper[4620]: I0129 15:35:34.112003 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:35:34 crc kubenswrapper[4620]: I0129 15:35:34.112052 4620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:35:34 crc kubenswrapper[4620]: I0129 15:35:34.112680 4620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4481636fef9a26707f61498a2c56f8b28e0ec213403fd672455df1021d2e8d0"} pod="openshift-machine-config-operator/machine-config-daemon-7469t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:35:34 crc kubenswrapper[4620]: I0129 15:35:34.112725 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" 
containerName="machine-config-daemon" containerID="cri-o://c4481636fef9a26707f61498a2c56f8b28e0ec213403fd672455df1021d2e8d0" gracePeriod=600 Jan 29 15:35:35 crc kubenswrapper[4620]: I0129 15:35:35.088814 4620 generic.go:334] "Generic (PLEG): container finished" podID="a76cce43-3d01-4158-b23a-e21fd5927792" containerID="c4481636fef9a26707f61498a2c56f8b28e0ec213403fd672455df1021d2e8d0" exitCode=0 Jan 29 15:35:35 crc kubenswrapper[4620]: I0129 15:35:35.088901 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerDied","Data":"c4481636fef9a26707f61498a2c56f8b28e0ec213403fd672455df1021d2e8d0"} Jan 29 15:35:35 crc kubenswrapper[4620]: I0129 15:35:35.089384 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerStarted","Data":"9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19"} Jan 29 15:35:35 crc kubenswrapper[4620]: I0129 15:35:35.089421 4620 scope.go:117] "RemoveContainer" containerID="68f4c5715be8136c1e3906374e8563b4dc0c4fd172d89adda2e4a2ef221af03c" Jan 29 15:35:37 crc kubenswrapper[4620]: I0129 15:35:37.284080 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fdf62" Jan 29 15:35:37 crc kubenswrapper[4620]: I0129 15:35:37.344986 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fdf62" Jan 29 15:35:37 crc kubenswrapper[4620]: I0129 15:35:37.527890 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fdf62"] Jan 29 15:35:39 crc kubenswrapper[4620]: I0129 15:35:39.124977 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fdf62" podUID="bc715d18-f484-4783-b6f8-4f85f14dfa08" containerName="registry-server" containerID="cri-o://6aaab9b0a5e95afec0084f26e0db5657854061ccd5d5dfa959d366a2a3c1cd4e" gracePeriod=2 Jan 29 15:35:39 crc kubenswrapper[4620]: I0129 15:35:39.570430 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fdf62" Jan 29 15:35:39 crc kubenswrapper[4620]: I0129 15:35:39.770971 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc715d18-f484-4783-b6f8-4f85f14dfa08-utilities\") pod \"bc715d18-f484-4783-b6f8-4f85f14dfa08\" (UID: \"bc715d18-f484-4783-b6f8-4f85f14dfa08\") " Jan 29 15:35:39 crc kubenswrapper[4620]: I0129 15:35:39.771030 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc715d18-f484-4783-b6f8-4f85f14dfa08-catalog-content\") pod \"bc715d18-f484-4783-b6f8-4f85f14dfa08\" (UID: \"bc715d18-f484-4783-b6f8-4f85f14dfa08\") " Jan 29 15:35:39 crc kubenswrapper[4620]: I0129 15:35:39.771061 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8nlg\" (UniqueName: \"kubernetes.io/projected/bc715d18-f484-4783-b6f8-4f85f14dfa08-kube-api-access-q8nlg\") pod \"bc715d18-f484-4783-b6f8-4f85f14dfa08\" (UID: \"bc715d18-f484-4783-b6f8-4f85f14dfa08\") " Jan 29 15:35:39 crc kubenswrapper[4620]: I0129 15:35:39.776089 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc715d18-f484-4783-b6f8-4f85f14dfa08-utilities" (OuterVolumeSpecName: "utilities") pod "bc715d18-f484-4783-b6f8-4f85f14dfa08" (UID: "bc715d18-f484-4783-b6f8-4f85f14dfa08"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:35:39 crc kubenswrapper[4620]: I0129 15:35:39.789368 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc715d18-f484-4783-b6f8-4f85f14dfa08-kube-api-access-q8nlg" (OuterVolumeSpecName: "kube-api-access-q8nlg") pod "bc715d18-f484-4783-b6f8-4f85f14dfa08" (UID: "bc715d18-f484-4783-b6f8-4f85f14dfa08"). InnerVolumeSpecName "kube-api-access-q8nlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:35:39 crc kubenswrapper[4620]: I0129 15:35:39.872546 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc715d18-f484-4783-b6f8-4f85f14dfa08-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:39 crc kubenswrapper[4620]: I0129 15:35:39.872903 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8nlg\" (UniqueName: \"kubernetes.io/projected/bc715d18-f484-4783-b6f8-4f85f14dfa08-kube-api-access-q8nlg\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:39 crc kubenswrapper[4620]: I0129 15:35:39.952074 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc715d18-f484-4783-b6f8-4f85f14dfa08-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc715d18-f484-4783-b6f8-4f85f14dfa08" (UID: "bc715d18-f484-4783-b6f8-4f85f14dfa08"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:35:39 crc kubenswrapper[4620]: I0129 15:35:39.976695 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc715d18-f484-4783-b6f8-4f85f14dfa08-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:40 crc kubenswrapper[4620]: I0129 15:35:40.135424 4620 generic.go:334] "Generic (PLEG): container finished" podID="bc715d18-f484-4783-b6f8-4f85f14dfa08" containerID="6aaab9b0a5e95afec0084f26e0db5657854061ccd5d5dfa959d366a2a3c1cd4e" exitCode=0 Jan 29 15:35:40 crc kubenswrapper[4620]: I0129 15:35:40.135478 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fdf62" event={"ID":"bc715d18-f484-4783-b6f8-4f85f14dfa08","Type":"ContainerDied","Data":"6aaab9b0a5e95afec0084f26e0db5657854061ccd5d5dfa959d366a2a3c1cd4e"} Jan 29 15:35:40 crc kubenswrapper[4620]: I0129 15:35:40.135508 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fdf62" event={"ID":"bc715d18-f484-4783-b6f8-4f85f14dfa08","Type":"ContainerDied","Data":"e63da18a3f7ad5a8fc3b089e4777684c25190f9f6f63a9948fa473f58edbdf5c"} Jan 29 15:35:40 crc kubenswrapper[4620]: I0129 15:35:40.135528 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fdf62" Jan 29 15:35:40 crc kubenswrapper[4620]: I0129 15:35:40.135543 4620 scope.go:117] "RemoveContainer" containerID="6aaab9b0a5e95afec0084f26e0db5657854061ccd5d5dfa959d366a2a3c1cd4e" Jan 29 15:35:40 crc kubenswrapper[4620]: I0129 15:35:40.159943 4620 scope.go:117] "RemoveContainer" containerID="d2a69b696976ae7161fbf099bbce54cdc098b8aac1d961ea5308fdc4ed87977f" Jan 29 15:35:40 crc kubenswrapper[4620]: I0129 15:35:40.197242 4620 scope.go:117] "RemoveContainer" containerID="c4299b7f04503853c72f62cc5abb218a45c667e4363a5ef83a6ebffcefc4f356" Jan 29 15:35:40 crc kubenswrapper[4620]: I0129 15:35:40.208906 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fdf62"] Jan 29 15:35:40 crc kubenswrapper[4620]: I0129 15:35:40.216909 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fdf62"] Jan 29 15:35:40 crc kubenswrapper[4620]: I0129 15:35:40.241383 4620 scope.go:117] "RemoveContainer" containerID="6aaab9b0a5e95afec0084f26e0db5657854061ccd5d5dfa959d366a2a3c1cd4e" Jan 29 15:35:40 crc kubenswrapper[4620]: E0129 15:35:40.241943 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aaab9b0a5e95afec0084f26e0db5657854061ccd5d5dfa959d366a2a3c1cd4e\": container with ID starting with 6aaab9b0a5e95afec0084f26e0db5657854061ccd5d5dfa959d366a2a3c1cd4e not found: ID does not exist" containerID="6aaab9b0a5e95afec0084f26e0db5657854061ccd5d5dfa959d366a2a3c1cd4e" Jan 29 15:35:40 crc kubenswrapper[4620]: I0129 15:35:40.241987 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aaab9b0a5e95afec0084f26e0db5657854061ccd5d5dfa959d366a2a3c1cd4e"} err="failed to get container status \"6aaab9b0a5e95afec0084f26e0db5657854061ccd5d5dfa959d366a2a3c1cd4e\": rpc error: code = NotFound desc = could not find container \"6aaab9b0a5e95afec0084f26e0db5657854061ccd5d5dfa959d366a2a3c1cd4e\": container with ID starting with 6aaab9b0a5e95afec0084f26e0db5657854061ccd5d5dfa959d366a2a3c1cd4e not found: ID does not exist" Jan 29 15:35:40 crc 
Jan 29 15:35:40 crc kubenswrapper[4620]: I0129 15:35:40.242019 4620 scope.go:117] "RemoveContainer" containerID="d2a69b696976ae7161fbf099bbce54cdc098b8aac1d961ea5308fdc4ed87977f"
Jan 29 15:35:40 crc kubenswrapper[4620]: E0129 15:35:40.242355 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2a69b696976ae7161fbf099bbce54cdc098b8aac1d961ea5308fdc4ed87977f\": container with ID starting with d2a69b696976ae7161fbf099bbce54cdc098b8aac1d961ea5308fdc4ed87977f not found: ID does not exist" containerID="d2a69b696976ae7161fbf099bbce54cdc098b8aac1d961ea5308fdc4ed87977f"
Jan 29 15:35:40 crc kubenswrapper[4620]: I0129 15:35:40.242466 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2a69b696976ae7161fbf099bbce54cdc098b8aac1d961ea5308fdc4ed87977f"} err="failed to get container status \"d2a69b696976ae7161fbf099bbce54cdc098b8aac1d961ea5308fdc4ed87977f\": rpc error: code = NotFound desc = could not find container \"d2a69b696976ae7161fbf099bbce54cdc098b8aac1d961ea5308fdc4ed87977f\": container with ID starting with d2a69b696976ae7161fbf099bbce54cdc098b8aac1d961ea5308fdc4ed87977f not found: ID does not exist"
Jan 29 15:35:40 crc kubenswrapper[4620]: I0129 15:35:40.242575 4620 scope.go:117] "RemoveContainer" containerID="c4299b7f04503853c72f62cc5abb218a45c667e4363a5ef83a6ebffcefc4f356"
Jan 29 15:35:40 crc kubenswrapper[4620]: E0129 15:35:40.242987 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4299b7f04503853c72f62cc5abb218a45c667e4363a5ef83a6ebffcefc4f356\": container with ID starting with c4299b7f04503853c72f62cc5abb218a45c667e4363a5ef83a6ebffcefc4f356 not found: ID does not exist" containerID="c4299b7f04503853c72f62cc5abb218a45c667e4363a5ef83a6ebffcefc4f356"
Jan 29 15:35:40 crc kubenswrapper[4620]: I0129 15:35:40.243041 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4299b7f04503853c72f62cc5abb218a45c667e4363a5ef83a6ebffcefc4f356"} err="failed to get container status \"c4299b7f04503853c72f62cc5abb218a45c667e4363a5ef83a6ebffcefc4f356\": rpc error: code = NotFound desc = could not find container \"c4299b7f04503853c72f62cc5abb218a45c667e4363a5ef83a6ebffcefc4f356\": container with ID starting with c4299b7f04503853c72f62cc5abb218a45c667e4363a5ef83a6ebffcefc4f356 not found: ID does not exist"
Jan 29 15:35:40 crc kubenswrapper[4620]: I0129 15:35:40.883284 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc715d18-f484-4783-b6f8-4f85f14dfa08" path="/var/lib/kubelet/pods/bc715d18-f484-4783-b6f8-4f85f14dfa08/volumes"
Jan 29 15:37:34 crc kubenswrapper[4620]: I0129 15:37:34.110919 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 15:37:34 crc kubenswrapper[4620]: I0129 15:37:34.111439 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 15:38:04 crc kubenswrapper[4620]: I0129 15:38:04.111387 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 15:38:04 crc kubenswrapper[4620]: I0129 15:38:04.112096 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 15:38:34 crc kubenswrapper[4620]: I0129 15:38:34.111451 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 15:38:34 crc kubenswrapper[4620]: I0129 15:38:34.112302 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 15:38:34 crc kubenswrapper[4620]: I0129 15:38:34.112392 4620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7469t"
Jan 29 15:38:34 crc kubenswrapper[4620]: I0129 15:38:34.113462 4620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19"} pod="openshift-machine-config-operator/machine-config-daemon-7469t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 15:38:34 crc kubenswrapper[4620]: I0129 15:38:34.113555 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" containerID="cri-o://9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19" gracePeriod=600
Jan 29 15:38:34 crc kubenswrapper[4620]: E0129 15:38:34.265153 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:38:34 crc kubenswrapper[4620]: I0129 15:38:34.565227 4620 generic.go:334] "Generic (PLEG): container finished" podID="a76cce43-3d01-4158-b23a-e21fd5927792" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19" exitCode=0
Jan 29 15:38:34 crc kubenswrapper[4620]: I0129 15:38:34.565285 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerDied","Data":"9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19"}
Jan 29 15:38:34 crc kubenswrapper[4620]: I0129 15:38:34.565361 4620 scope.go:117] "RemoveContainer" containerID="c4481636fef9a26707f61498a2c56f8b28e0ec213403fd672455df1021d2e8d0"
Jan 29 15:38:34 crc kubenswrapper[4620]: I0129 15:38:34.566046 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19"
Jan 29 15:38:34 crc kubenswrapper[4620]: E0129 15:38:34.566345 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:38:48 crc kubenswrapper[4620]: I0129 15:38:48.872821 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19"
Jan 29 15:38:48 crc kubenswrapper[4620]: E0129 15:38:48.873932 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:39:03 crc kubenswrapper[4620]: I0129 15:39:03.873738 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19"
Jan 29 15:39:03 crc kubenswrapper[4620]: E0129 15:39:03.875187 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:39:17 crc kubenswrapper[4620]: I0129 15:39:17.872799 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19"
Jan 29 15:39:17 crc kubenswrapper[4620]: E0129 15:39:17.874022 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:39:31 crc kubenswrapper[4620]: I0129 15:39:31.871959 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19"
Jan 29 15:39:31 crc kubenswrapper[4620]: E0129 15:39:31.872667 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:39:42 crc kubenswrapper[4620]: I0129 15:39:42.872999 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19"
Jan 29 15:39:42 crc kubenswrapper[4620]: E0129 15:39:42.874835 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:39:57 crc kubenswrapper[4620]: I0129 15:39:57.873045 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19"
Jan 29 15:39:57 crc kubenswrapper[4620]: E0129 15:39:57.873753 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:40:10 crc kubenswrapper[4620]: I0129 15:40:10.877673 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19"
Jan 29 15:40:10 crc kubenswrapper[4620]: E0129 15:40:10.878432 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:40:24 crc kubenswrapper[4620]: I0129 15:40:24.872800 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19"
Jan 29 15:40:24 crc kubenswrapper[4620]: E0129 15:40:24.873986 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:40:38 crc kubenswrapper[4620]: I0129 15:40:38.872682 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19"
Jan 29 15:40:38 crc kubenswrapper[4620]: E0129 15:40:38.873372 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:40:53 crc kubenswrapper[4620]: I0129 15:40:53.873129 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19"
Jan 29 15:40:53 crc kubenswrapper[4620]: E0129 15:40:53.873858 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:41:08 crc kubenswrapper[4620]: I0129 15:41:08.872780 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19"
Jan 29 15:41:08 crc kubenswrapper[4620]: E0129 15:41:08.873194 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:41:19 crc kubenswrapper[4620]: I0129 15:41:19.872358 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19"
Jan 29 15:41:19 crc kubenswrapper[4620]: E0129 15:41:19.873305 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:41:32 crc kubenswrapper[4620]: I0129 15:41:32.872536 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19"
Jan 29 15:41:32 crc kubenswrapper[4620]: E0129 15:41:32.873293 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:41:43 crc kubenswrapper[4620]: I0129 15:41:43.872417 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19"
Jan 29 15:41:43 crc kubenswrapper[4620]: E0129 15:41:43.873259 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:41:55 crc kubenswrapper[4620]: I0129 15:41:55.872798 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19"
Jan 29 15:41:55 crc kubenswrapper[4620]: E0129 15:41:55.873482 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:42:07 crc kubenswrapper[4620]: I0129 15:42:07.872736 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19"
Jan 29 15:42:07 crc kubenswrapper[4620]: E0129 15:42:07.873868 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:42:18 crc kubenswrapper[4620]: I0129 15:42:18.872289 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19"
Jan 29 15:42:18 crc kubenswrapper[4620]: E0129 15:42:18.872921 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:42:32 crc kubenswrapper[4620]: I0129 15:42:32.872180 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19"
Jan 29 15:42:32 crc kubenswrapper[4620]: E0129 15:42:32.872866 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792"
Jan 29 15:42:42 crc kubenswrapper[4620]: I0129 15:42:42.939023 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-44m7n"]
Jan 29 15:42:42 crc kubenswrapper[4620]: E0129 15:42:42.939802 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc715d18-f484-4783-b6f8-4f85f14dfa08" containerName="extract-utilities"
Jan 29 15:42:42 crc kubenswrapper[4620]: I0129 15:42:42.939817 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc715d18-f484-4783-b6f8-4f85f14dfa08" containerName="extract-utilities"
Jan 29 15:42:42 crc kubenswrapper[4620]: E0129 15:42:42.939836 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc715d18-f484-4783-b6f8-4f85f14dfa08" containerName="registry-server"
Jan 29 15:42:42 crc kubenswrapper[4620]: I0129 15:42:42.939842 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc715d18-f484-4783-b6f8-4f85f14dfa08" containerName="registry-server"
Jan 29 15:42:42 crc kubenswrapper[4620]: E0129 15:42:42.939860 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc715d18-f484-4783-b6f8-4f85f14dfa08" containerName="extract-content"
assignment" podUID="bc715d18-f484-4783-b6f8-4f85f14dfa08" containerName="extract-content" Jan 29 15:42:42 crc kubenswrapper[4620]: I0129 15:42:42.944276 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc715d18-f484-4783-b6f8-4f85f14dfa08" containerName="registry-server" Jan 29 15:42:42 crc kubenswrapper[4620]: I0129 15:42:42.945634 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-44m7n" Jan 29 15:42:42 crc kubenswrapper[4620]: I0129 15:42:42.961270 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-44m7n"] Jan 29 15:42:43 crc kubenswrapper[4620]: I0129 15:42:43.039163 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29gkh\" (UniqueName: \"kubernetes.io/projected/aa8615ea-eb4a-485e-af48-d166ccfb0062-kube-api-access-29gkh\") pod \"community-operators-44m7n\" (UID: \"aa8615ea-eb4a-485e-af48-d166ccfb0062\") " pod="openshift-marketplace/community-operators-44m7n" Jan 29 15:42:43 crc kubenswrapper[4620]: I0129 15:42:43.039443 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8615ea-eb4a-485e-af48-d166ccfb0062-utilities\") pod \"community-operators-44m7n\" (UID: \"aa8615ea-eb4a-485e-af48-d166ccfb0062\") " pod="openshift-marketplace/community-operators-44m7n" Jan 29 15:42:43 crc kubenswrapper[4620]: I0129 15:42:43.039571 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8615ea-eb4a-485e-af48-d166ccfb0062-catalog-content\") pod \"community-operators-44m7n\" (UID: \"aa8615ea-eb4a-485e-af48-d166ccfb0062\") " pod="openshift-marketplace/community-operators-44m7n" Jan 29 15:42:43 crc kubenswrapper[4620]: I0129 15:42:43.140686 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29gkh\" (UniqueName: \"kubernetes.io/projected/aa8615ea-eb4a-485e-af48-d166ccfb0062-kube-api-access-29gkh\") pod \"community-operators-44m7n\" (UID: \"aa8615ea-eb4a-485e-af48-d166ccfb0062\") " pod="openshift-marketplace/community-operators-44m7n" Jan 29 15:42:43 crc kubenswrapper[4620]: I0129 15:42:43.140742 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8615ea-eb4a-485e-af48-d166ccfb0062-utilities\") pod \"community-operators-44m7n\" (UID: \"aa8615ea-eb4a-485e-af48-d166ccfb0062\") " pod="openshift-marketplace/community-operators-44m7n" Jan 29 15:42:43 crc kubenswrapper[4620]: I0129 15:42:43.140798 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8615ea-eb4a-485e-af48-d166ccfb0062-catalog-content\") pod \"community-operators-44m7n\" (UID: \"aa8615ea-eb4a-485e-af48-d166ccfb0062\") " pod="openshift-marketplace/community-operators-44m7n" Jan 29 15:42:43 crc kubenswrapper[4620]: I0129 15:42:43.141248 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8615ea-eb4a-485e-af48-d166ccfb0062-catalog-content\") pod \"community-operators-44m7n\" (UID: \"aa8615ea-eb4a-485e-af48-d166ccfb0062\") " pod="openshift-marketplace/community-operators-44m7n" Jan 29 15:42:43 crc kubenswrapper[4620]: I0129 15:42:43.141403 4620 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8615ea-eb4a-485e-af48-d166ccfb0062-utilities\") pod \"community-operators-44m7n\" (UID: \"aa8615ea-eb4a-485e-af48-d166ccfb0062\") " pod="openshift-marketplace/community-operators-44m7n" Jan 29 15:42:43 crc kubenswrapper[4620]: I0129 15:42:43.172840 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29gkh\" (UniqueName: \"kubernetes.io/projected/aa8615ea-eb4a-485e-af48-d166ccfb0062-kube-api-access-29gkh\") pod \"community-operators-44m7n\" (UID: \"aa8615ea-eb4a-485e-af48-d166ccfb0062\") " pod="openshift-marketplace/community-operators-44m7n" Jan 29 15:42:43 crc kubenswrapper[4620]: I0129 15:42:43.262932 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-44m7n" Jan 29 15:42:43 crc kubenswrapper[4620]: I0129 15:42:43.812666 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-44m7n"] Jan 29 15:42:44 crc kubenswrapper[4620]: I0129 15:42:44.598922 4620 generic.go:334] "Generic (PLEG): container finished" podID="aa8615ea-eb4a-485e-af48-d166ccfb0062" containerID="8c4896c94cce28c248bc5bdfcf67053a4143234d7dfae664ccd596ce3a061313" exitCode=0 Jan 29 15:42:44 crc kubenswrapper[4620]: I0129 15:42:44.599075 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44m7n" event={"ID":"aa8615ea-eb4a-485e-af48-d166ccfb0062","Type":"ContainerDied","Data":"8c4896c94cce28c248bc5bdfcf67053a4143234d7dfae664ccd596ce3a061313"} Jan 29 15:42:44 crc kubenswrapper[4620]: I0129 15:42:44.599298 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44m7n" event={"ID":"aa8615ea-eb4a-485e-af48-d166ccfb0062","Type":"ContainerStarted","Data":"8542697d5a93b24bf2f7681bccf17674f1dc529dbfe26ef2ab9823574b0c6224"} Jan 29 15:42:44 crc kubenswrapper[4620]: I0129 15:42:44.600641 4620 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:42:44 crc kubenswrapper[4620]: E0129 15:42:44.774228 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:42:44 crc kubenswrapper[4620]: E0129 15:42:44.774402 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29gkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-44m7n_openshift-marketplace(aa8615ea-eb4a-485e-af48-d166ccfb0062): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:42:44 crc kubenswrapper[4620]: E0129 15:42:44.775814 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" Jan 29 15:42:44 crc kubenswrapper[4620]: I0129 15:42:44.872733 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19" Jan 29 15:42:44 crc kubenswrapper[4620]: E0129 15:42:44.872963 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:42:45 crc kubenswrapper[4620]: E0129 15:42:45.620579 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" Jan 29 15:42:59 crc kubenswrapper[4620]: I0129 15:42:59.872926 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19" Jan 29 15:42:59 crc kubenswrapper[4620]: E0129 15:42:59.873687 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:43:00 crc kubenswrapper[4620]: E0129 15:43:00.331419 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:43:00 crc kubenswrapper[4620]: E0129 15:43:00.331889 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29gkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-44m7n_openshift-marketplace(aa8615ea-eb4a-485e-af48-d166ccfb0062): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:43:00 crc kubenswrapper[4620]: E0129 15:43:00.333339 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" Jan 29 15:43:10 crc kubenswrapper[4620]: I0129 15:43:10.892514 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19" Jan 29 15:43:10 crc kubenswrapper[4620]: E0129 15:43:10.894237 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:43:11 crc kubenswrapper[4620]: E0129 15:43:11.873987 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" Jan 29 15:43:22 crc kubenswrapper[4620]: I0129 15:43:22.872359 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19" Jan 29 15:43:22 crc kubenswrapper[4620]: E0129 15:43:22.873508 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:43:24 crc kubenswrapper[4620]: E0129 15:43:24.004579 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:43:24 crc kubenswrapper[4620]: E0129 15:43:24.005329 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29gkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-44m7n_openshift-marketplace(aa8615ea-eb4a-485e-af48-d166ccfb0062): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: 
Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:43:24 crc kubenswrapper[4620]: E0129 15:43:24.006476 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" Jan 29 15:43:33 crc kubenswrapper[4620]: I0129 15:43:33.873032 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19" Jan 29 15:43:33 crc kubenswrapper[4620]: E0129 15:43:33.873913 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:43:38 crc kubenswrapper[4620]: E0129 15:43:38.874803 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" Jan 29 15:43:44 crc kubenswrapper[4620]: I0129 15:43:44.872640 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19" Jan 29 15:43:46 crc kubenswrapper[4620]: I0129 15:43:46.056545 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerStarted","Data":"0363493f5282aa072e0d3aa4dfc81106b236f93cd58f3bf51671a562d1114bd3"} Jan 29 15:43:49 crc kubenswrapper[4620]: E0129 15:43:49.875641 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" Jan 29 15:44:03 crc kubenswrapper[4620]: E0129 15:44:03.874132 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" Jan 29 15:44:15 crc kubenswrapper[4620]: E0129 15:44:15.006269 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:44:15 crc kubenswrapper[4620]: E0129 15:44:15.006845 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29gkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-44m7n_openshift-marketplace(aa8615ea-eb4a-485e-af48-d166ccfb0062): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:44:15 crc kubenswrapper[4620]: E0129 15:44:15.008023 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" Jan 29 15:44:24 crc kubenswrapper[4620]: I0129 15:44:24.770608 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s6zv7"] Jan 29 15:44:24 crc kubenswrapper[4620]: I0129 15:44:24.796944 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s6zv7" Jan 29 15:44:24 crc kubenswrapper[4620]: I0129 15:44:24.818018 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s6zv7"] Jan 29 15:44:24 crc kubenswrapper[4620]: I0129 15:44:24.841665 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddc4w\" (UniqueName: \"kubernetes.io/projected/11cdc57e-71d4-44ba-832d-58ad4bc76d2a-kube-api-access-ddc4w\") pod \"redhat-marketplace-s6zv7\" (UID: \"11cdc57e-71d4-44ba-832d-58ad4bc76d2a\") " pod="openshift-marketplace/redhat-marketplace-s6zv7" Jan 29 15:44:24 crc kubenswrapper[4620]: I0129 15:44:24.841855 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11cdc57e-71d4-44ba-832d-58ad4bc76d2a-catalog-content\") pod \"redhat-marketplace-s6zv7\" (UID: \"11cdc57e-71d4-44ba-832d-58ad4bc76d2a\") " pod="openshift-marketplace/redhat-marketplace-s6zv7" Jan 29 15:44:24 crc kubenswrapper[4620]: I0129 15:44:24.841989 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11cdc57e-71d4-44ba-832d-58ad4bc76d2a-utilities\") pod \"redhat-marketplace-s6zv7\" (UID: \"11cdc57e-71d4-44ba-832d-58ad4bc76d2a\") " pod="openshift-marketplace/redhat-marketplace-s6zv7" Jan 29 15:44:24 crc kubenswrapper[4620]: I0129 15:44:24.942934 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11cdc57e-71d4-44ba-832d-58ad4bc76d2a-catalog-content\") pod \"redhat-marketplace-s6zv7\" (UID: \"11cdc57e-71d4-44ba-832d-58ad4bc76d2a\") " pod="openshift-marketplace/redhat-marketplace-s6zv7" Jan 29 15:44:24 crc kubenswrapper[4620]: I0129 15:44:24.943316 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11cdc57e-71d4-44ba-832d-58ad4bc76d2a-utilities\") pod \"redhat-marketplace-s6zv7\" (UID: \"11cdc57e-71d4-44ba-832d-58ad4bc76d2a\") " pod="openshift-marketplace/redhat-marketplace-s6zv7" Jan 29 15:44:24 crc kubenswrapper[4620]: I0129 15:44:24.943445 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddc4w\" (UniqueName: \"kubernetes.io/projected/11cdc57e-71d4-44ba-832d-58ad4bc76d2a-kube-api-access-ddc4w\") pod \"redhat-marketplace-s6zv7\" (UID: \"11cdc57e-71d4-44ba-832d-58ad4bc76d2a\") " pod="openshift-marketplace/redhat-marketplace-s6zv7" Jan 29 15:44:24 crc kubenswrapper[4620]: I0129 15:44:24.944072 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11cdc57e-71d4-44ba-832d-58ad4bc76d2a-catalog-content\") pod \"redhat-marketplace-s6zv7\" (UID: \"11cdc57e-71d4-44ba-832d-58ad4bc76d2a\") " pod="openshift-marketplace/redhat-marketplace-s6zv7" Jan 29 15:44:24 crc kubenswrapper[4620]: I0129 15:44:24.944093 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11cdc57e-71d4-44ba-832d-58ad4bc76d2a-utilities\") pod \"redhat-marketplace-s6zv7\" (UID: \"11cdc57e-71d4-44ba-832d-58ad4bc76d2a\") " pod="openshift-marketplace/redhat-marketplace-s6zv7" Jan 29 15:44:24 crc kubenswrapper[4620]: I0129 15:44:24.968820 4620 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ddc4w\" (UniqueName: \"kubernetes.io/projected/11cdc57e-71d4-44ba-832d-58ad4bc76d2a-kube-api-access-ddc4w\") pod \"redhat-marketplace-s6zv7\" (UID: \"11cdc57e-71d4-44ba-832d-58ad4bc76d2a\") " pod="openshift-marketplace/redhat-marketplace-s6zv7"
Jan 29 15:44:25 crc kubenswrapper[4620]: I0129 15:44:25.158721 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s6zv7"
Jan 29 15:44:25 crc kubenswrapper[4620]: I0129 15:44:25.590602 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s6zv7"]
Jan 29 15:44:26 crc kubenswrapper[4620]: I0129 15:44:26.356164 4620 generic.go:334] "Generic (PLEG): container finished" podID="11cdc57e-71d4-44ba-832d-58ad4bc76d2a" containerID="d9ea0e08891fbdeee9aa2c09ac6a69ce2650893d5ed86bcbe294f0d41e952706" exitCode=0
Jan 29 15:44:26 crc kubenswrapper[4620]: I0129 15:44:26.356228 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6zv7" event={"ID":"11cdc57e-71d4-44ba-832d-58ad4bc76d2a","Type":"ContainerDied","Data":"d9ea0e08891fbdeee9aa2c09ac6a69ce2650893d5ed86bcbe294f0d41e952706"}
Jan 29 15:44:26 crc kubenswrapper[4620]: I0129 15:44:26.356584 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6zv7" event={"ID":"11cdc57e-71d4-44ba-832d-58ad4bc76d2a","Type":"ContainerStarted","Data":"02f5d8704bf703fcb0129e44036b7230ca4e5e1b1a9edad04f5fe57e81abb8f0"}
Jan 29 15:44:26 crc kubenswrapper[4620]: E0129 15:44:26.480864 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 29 15:44:26 crc kubenswrapper[4620]: E0129 15:44:26.481009 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ddc4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s6zv7_openshift-marketplace(11cdc57e-71d4-44ba-832d-58ad4bc76d2a): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 15:44:26 crc kubenswrapper[4620]: E0129 15:44:26.482600 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-s6zv7" podUID="11cdc57e-71d4-44ba-832d-58ad4bc76d2a"
Jan 29 15:44:27 crc kubenswrapper[4620]: E0129 15:44:27.362735 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-s6zv7" podUID="11cdc57e-71d4-44ba-832d-58ad4bc76d2a"
Jan 29 15:44:29 crc kubenswrapper[4620]: E0129 15:44:29.874673 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062"
Jan 29 15:44:41 crc kubenswrapper[4620]: E0129 15:44:41.005852 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 29 15:44:41 crc kubenswrapper[4620]: E0129 15:44:41.006916 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ddc4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s6zv7_openshift-marketplace(11cdc57e-71d4-44ba-832d-58ad4bc76d2a): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 15:44:41 crc kubenswrapper[4620]: E0129 15:44:41.008175 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-s6zv7" podUID="11cdc57e-71d4-44ba-832d-58ad4bc76d2a"
Jan 29 15:44:44 crc kubenswrapper[4620]: E0129 15:44:44.875351 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062"
Jan 29 15:44:55 crc kubenswrapper[4620]: E0129 15:44:55.874660 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-s6zv7" podUID="11cdc57e-71d4-44ba-832d-58ad4bc76d2a"
Jan 29 15:44:56 crc kubenswrapper[4620]: E0129 15:44:56.874582 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062"
Jan 29 15:45:00 crc kubenswrapper[4620]: I0129 15:45:00.146324 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495025-lgm5g"]
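The repeated ErrImagePull / "Back-off pulling image" pairs above are the kubelet's image-pull back-off at work: each consecutive pull failure roughly doubles the wait before the next attempt (by default from a 10s base up to a 5m cap), which is why the retries for redhat-marketplace-s6zv7 land at 15:44:26, 15:44:41, 15:45:08 and so on, with "Error syncing pod, skipping" filler in between. A minimal stdlib-only sketch of that doubling pattern (illustrative only, not the kubelet's actual code):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed defaults: 10s base back-off, 5m cap (kubelet image pulls).
        const base, maxDelay = 10 * time.Second, 5 * time.Minute
        delay := base
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("pull attempt %d failed; backing off %v\n", attempt, delay)
            delay *= 2 // double after every consecutive failure...
            if delay > maxDelay {
                delay = maxDelay // ...saturating at the cap
            }
        }
    }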
Jan 29 15:45:00 crc kubenswrapper[4620]: I0129 15:45:00.147748 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-lgm5g"
Jan 29 15:45:00 crc kubenswrapper[4620]: I0129 15:45:00.149732 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 29 15:45:00 crc kubenswrapper[4620]: I0129 15:45:00.150523 4620 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 29 15:45:00 crc kubenswrapper[4620]: I0129 15:45:00.159378 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495025-lgm5g"]
Jan 29 15:45:00 crc kubenswrapper[4620]: I0129 15:45:00.270021 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc9e836e-74dc-41d7-b428-21c353eee738-config-volume\") pod \"collect-profiles-29495025-lgm5g\" (UID: \"fc9e836e-74dc-41d7-b428-21c353eee738\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-lgm5g"
Jan 29 15:45:00 crc kubenswrapper[4620]: I0129 15:45:00.270090 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc9e836e-74dc-41d7-b428-21c353eee738-secret-volume\") pod \"collect-profiles-29495025-lgm5g\" (UID: \"fc9e836e-74dc-41d7-b428-21c353eee738\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-lgm5g"
Jan 29 15:45:00 crc kubenswrapper[4620]: I0129 15:45:00.270124 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzfjq\" (UniqueName: \"kubernetes.io/projected/fc9e836e-74dc-41d7-b428-21c353eee738-kube-api-access-gzfjq\") pod \"collect-profiles-29495025-lgm5g\" (UID: \"fc9e836e-74dc-41d7-b428-21c353eee738\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-lgm5g"
Jan 29 15:45:00 crc kubenswrapper[4620]: I0129 15:45:00.371807 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzfjq\" (UniqueName: \"kubernetes.io/projected/fc9e836e-74dc-41d7-b428-21c353eee738-kube-api-access-gzfjq\") pod \"collect-profiles-29495025-lgm5g\" (UID: \"fc9e836e-74dc-41d7-b428-21c353eee738\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-lgm5g"
Jan 29 15:45:00 crc kubenswrapper[4620]: I0129 15:45:00.371908 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc9e836e-74dc-41d7-b428-21c353eee738-config-volume\") pod \"collect-profiles-29495025-lgm5g\" (UID: \"fc9e836e-74dc-41d7-b428-21c353eee738\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-lgm5g"
Jan 29 15:45:00 crc kubenswrapper[4620]: I0129 15:45:00.371949 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc9e836e-74dc-41d7-b428-21c353eee738-secret-volume\") pod \"collect-profiles-29495025-lgm5g\" (UID: \"fc9e836e-74dc-41d7-b428-21c353eee738\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-lgm5g"
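Every kubenswrapper record above carries the standard klog header (severity letter, MMDD date, wall-clock time, PID, source file:line) in front of the quoted structured message. A small sketch for splitting that header out when post-processing a capture like this one (field names are mine; the layout follows klog's documented "Lmmdd hh:mm:ss.uuuuuu threadid file:line] msg" format):

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches the klog header that follows the syslog prefix on each record.
    var klogHeader = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ :]+):(\d+)\] (.*)$`)

    func main() {
        line := `I0129 15:45:00.270021 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started"`
        if m := klogHeader.FindStringSubmatch(line); m != nil {
            fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s:%s\nmessage=%s\n",
                m[1], m[2], m[3], m[4], m[5], m[6], m[7])
        }
    }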
\"kubernetes.io/configmap/fc9e836e-74dc-41d7-b428-21c353eee738-config-volume\") pod \"collect-profiles-29495025-lgm5g\" (UID: \"fc9e836e-74dc-41d7-b428-21c353eee738\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-lgm5g" Jan 29 15:45:00 crc kubenswrapper[4620]: I0129 15:45:00.380865 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc9e836e-74dc-41d7-b428-21c353eee738-secret-volume\") pod \"collect-profiles-29495025-lgm5g\" (UID: \"fc9e836e-74dc-41d7-b428-21c353eee738\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-lgm5g" Jan 29 15:45:00 crc kubenswrapper[4620]: I0129 15:45:00.390658 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzfjq\" (UniqueName: \"kubernetes.io/projected/fc9e836e-74dc-41d7-b428-21c353eee738-kube-api-access-gzfjq\") pod \"collect-profiles-29495025-lgm5g\" (UID: \"fc9e836e-74dc-41d7-b428-21c353eee738\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-lgm5g" Jan 29 15:45:00 crc kubenswrapper[4620]: I0129 15:45:00.491663 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-lgm5g" Jan 29 15:45:00 crc kubenswrapper[4620]: I0129 15:45:00.921937 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495025-lgm5g"] Jan 29 15:45:00 crc kubenswrapper[4620]: W0129 15:45:00.927849 4620 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc9e836e_74dc_41d7_b428_21c353eee738.slice/crio-b54f7a968bdeca53b7ee5bd158ba70814838e7ddf4d6670a50a0979a69a0fea3 WatchSource:0}: Error finding container b54f7a968bdeca53b7ee5bd158ba70814838e7ddf4d6670a50a0979a69a0fea3: Status 404 returned error can't find the container with id b54f7a968bdeca53b7ee5bd158ba70814838e7ddf4d6670a50a0979a69a0fea3 Jan 29 15:45:01 crc kubenswrapper[4620]: I0129 15:45:01.571874 4620 generic.go:334] "Generic (PLEG): container finished" podID="fc9e836e-74dc-41d7-b428-21c353eee738" containerID="213dfb4c26f23d3f7fa0096e3be0c4070d6d21880950f8e5810c7e812bb88a14" exitCode=0 Jan 29 15:45:01 crc kubenswrapper[4620]: I0129 15:45:01.571936 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-lgm5g" event={"ID":"fc9e836e-74dc-41d7-b428-21c353eee738","Type":"ContainerDied","Data":"213dfb4c26f23d3f7fa0096e3be0c4070d6d21880950f8e5810c7e812bb88a14"} Jan 29 15:45:01 crc kubenswrapper[4620]: I0129 15:45:01.572859 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-lgm5g" event={"ID":"fc9e836e-74dc-41d7-b428-21c353eee738","Type":"ContainerStarted","Data":"b54f7a968bdeca53b7ee5bd158ba70814838e7ddf4d6670a50a0979a69a0fea3"} Jan 29 15:45:02 crc kubenswrapper[4620]: I0129 15:45:02.870031 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-lgm5g" Jan 29 15:45:03 crc kubenswrapper[4620]: I0129 15:45:03.008363 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc9e836e-74dc-41d7-b428-21c353eee738-secret-volume\") pod \"fc9e836e-74dc-41d7-b428-21c353eee738\" (UID: \"fc9e836e-74dc-41d7-b428-21c353eee738\") " Jan 29 15:45:03 crc kubenswrapper[4620]: I0129 15:45:03.008462 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc9e836e-74dc-41d7-b428-21c353eee738-config-volume\") pod \"fc9e836e-74dc-41d7-b428-21c353eee738\" (UID: \"fc9e836e-74dc-41d7-b428-21c353eee738\") " Jan 29 15:45:03 crc kubenswrapper[4620]: I0129 15:45:03.008572 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzfjq\" (UniqueName: \"kubernetes.io/projected/fc9e836e-74dc-41d7-b428-21c353eee738-kube-api-access-gzfjq\") pod \"fc9e836e-74dc-41d7-b428-21c353eee738\" (UID: \"fc9e836e-74dc-41d7-b428-21c353eee738\") " Jan 29 15:45:03 crc kubenswrapper[4620]: I0129 15:45:03.009324 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc9e836e-74dc-41d7-b428-21c353eee738-config-volume" (OuterVolumeSpecName: "config-volume") pod "fc9e836e-74dc-41d7-b428-21c353eee738" (UID: "fc9e836e-74dc-41d7-b428-21c353eee738"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:45:03 crc kubenswrapper[4620]: I0129 15:45:03.011194 4620 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc9e836e-74dc-41d7-b428-21c353eee738-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:45:03 crc kubenswrapper[4620]: I0129 15:45:03.013950 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc9e836e-74dc-41d7-b428-21c353eee738-kube-api-access-gzfjq" (OuterVolumeSpecName: "kube-api-access-gzfjq") pod "fc9e836e-74dc-41d7-b428-21c353eee738" (UID: "fc9e836e-74dc-41d7-b428-21c353eee738"). InnerVolumeSpecName "kube-api-access-gzfjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:45:03 crc kubenswrapper[4620]: I0129 15:45:03.014943 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc9e836e-74dc-41d7-b428-21c353eee738-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fc9e836e-74dc-41d7-b428-21c353eee738" (UID: "fc9e836e-74dc-41d7-b428-21c353eee738"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:45:03 crc kubenswrapper[4620]: I0129 15:45:03.113196 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzfjq\" (UniqueName: \"kubernetes.io/projected/fc9e836e-74dc-41d7-b428-21c353eee738-kube-api-access-gzfjq\") on node \"crc\" DevicePath \"\"" Jan 29 15:45:03 crc kubenswrapper[4620]: I0129 15:45:03.113548 4620 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc9e836e-74dc-41d7-b428-21c353eee738-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:45:03 crc kubenswrapper[4620]: I0129 15:45:03.585704 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-lgm5g" event={"ID":"fc9e836e-74dc-41d7-b428-21c353eee738","Type":"ContainerDied","Data":"b54f7a968bdeca53b7ee5bd158ba70814838e7ddf4d6670a50a0979a69a0fea3"} Jan 29 15:45:03 crc kubenswrapper[4620]: I0129 15:45:03.585744 4620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b54f7a968bdeca53b7ee5bd158ba70814838e7ddf4d6670a50a0979a69a0fea3" Jan 29 15:45:03 crc kubenswrapper[4620]: I0129 15:45:03.586100 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-lgm5g" Jan 29 15:45:03 crc kubenswrapper[4620]: I0129 15:45:03.946805 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9"] Jan 29 15:45:03 crc kubenswrapper[4620]: I0129 15:45:03.953338 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494980-dkmz9"] Jan 29 15:45:04 crc kubenswrapper[4620]: I0129 15:45:04.880713 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="713c81b7-8d56-4d58-bd4e-f827de0ca17b" path="/var/lib/kubelet/pods/713c81b7-8d56-4d58-bd4e-f827de0ca17b/volumes" Jan 29 15:45:08 crc kubenswrapper[4620]: E0129 15:45:08.005896 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:45:08 crc kubenswrapper[4620]: E0129 15:45:08.006195 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ddc4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s6zv7_openshift-marketplace(11cdc57e-71d4-44ba-832d-58ad4bc76d2a): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:45:08 crc kubenswrapper[4620]: E0129 15:45:08.007378 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-s6zv7" podUID="11cdc57e-71d4-44ba-832d-58ad4bc76d2a" Jan 29 15:45:09 crc kubenswrapper[4620]: E0129 15:45:09.874201 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" Jan 29 15:45:10 crc kubenswrapper[4620]: I0129 15:45:10.697483 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qn7jr"] Jan 29 15:45:10 crc kubenswrapper[4620]: E0129 15:45:10.697922 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc9e836e-74dc-41d7-b428-21c353eee738" containerName="collect-profiles" Jan 29 15:45:10 crc kubenswrapper[4620]: I0129 15:45:10.697943 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9e836e-74dc-41d7-b428-21c353eee738" containerName="collect-profiles" Jan 29 15:45:10 crc kubenswrapper[4620]: I0129 15:45:10.698142 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc9e836e-74dc-41d7-b428-21c353eee738" containerName="collect-profiles" Jan 29 15:45:10 crc kubenswrapper[4620]: I0129 15:45:10.699588 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qn7jr" Jan 29 15:45:10 crc kubenswrapper[4620]: I0129 15:45:10.755665 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qn7jr"] Jan 29 15:45:10 crc kubenswrapper[4620]: I0129 15:45:10.790249 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7335a5e6-f470-4f8e-9816-f323736d2b5c-utilities\") pod \"certified-operators-qn7jr\" (UID: \"7335a5e6-f470-4f8e-9816-f323736d2b5c\") " pod="openshift-marketplace/certified-operators-qn7jr" Jan 29 15:45:10 crc kubenswrapper[4620]: I0129 15:45:10.790373 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7335a5e6-f470-4f8e-9816-f323736d2b5c-catalog-content\") pod \"certified-operators-qn7jr\" (UID: \"7335a5e6-f470-4f8e-9816-f323736d2b5c\") " pod="openshift-marketplace/certified-operators-qn7jr" Jan 29 15:45:10 crc kubenswrapper[4620]: I0129 15:45:10.790397 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q54t\" (UniqueName: \"kubernetes.io/projected/7335a5e6-f470-4f8e-9816-f323736d2b5c-kube-api-access-4q54t\") pod \"certified-operators-qn7jr\" (UID: \"7335a5e6-f470-4f8e-9816-f323736d2b5c\") " pod="openshift-marketplace/certified-operators-qn7jr" Jan 29 15:45:10 crc kubenswrapper[4620]: I0129 15:45:10.891366 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7335a5e6-f470-4f8e-9816-f323736d2b5c-utilities\") pod \"certified-operators-qn7jr\" (UID: \"7335a5e6-f470-4f8e-9816-f323736d2b5c\") " pod="openshift-marketplace/certified-operators-qn7jr" Jan 29 15:45:10 crc kubenswrapper[4620]: I0129 15:45:10.891461 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7335a5e6-f470-4f8e-9816-f323736d2b5c-catalog-content\") pod \"certified-operators-qn7jr\" (UID: \"7335a5e6-f470-4f8e-9816-f323736d2b5c\") " pod="openshift-marketplace/certified-operators-qn7jr" Jan 29 15:45:10 crc kubenswrapper[4620]: I0129 15:45:10.891477 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q54t\" (UniqueName: \"kubernetes.io/projected/7335a5e6-f470-4f8e-9816-f323736d2b5c-kube-api-access-4q54t\") pod \"certified-operators-qn7jr\" (UID: \"7335a5e6-f470-4f8e-9816-f323736d2b5c\") " pod="openshift-marketplace/certified-operators-qn7jr" Jan 29 15:45:10 crc kubenswrapper[4620]: I0129 15:45:10.892848 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7335a5e6-f470-4f8e-9816-f323736d2b5c-utilities\") pod \"certified-operators-qn7jr\" (UID: \"7335a5e6-f470-4f8e-9816-f323736d2b5c\") " pod="openshift-marketplace/certified-operators-qn7jr" Jan 29 15:45:10 crc kubenswrapper[4620]: I0129 15:45:10.893268 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7335a5e6-f470-4f8e-9816-f323736d2b5c-catalog-content\") pod \"certified-operators-qn7jr\" (UID: \"7335a5e6-f470-4f8e-9816-f323736d2b5c\") " pod="openshift-marketplace/certified-operators-qn7jr" Jan 29 15:45:10 crc kubenswrapper[4620]: I0129 15:45:10.913547 4620 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4q54t\" (UniqueName: \"kubernetes.io/projected/7335a5e6-f470-4f8e-9816-f323736d2b5c-kube-api-access-4q54t\") pod \"certified-operators-qn7jr\" (UID: \"7335a5e6-f470-4f8e-9816-f323736d2b5c\") " pod="openshift-marketplace/certified-operators-qn7jr" Jan 29 15:45:11 crc kubenswrapper[4620]: I0129 15:45:11.065213 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qn7jr" Jan 29 15:45:11 crc kubenswrapper[4620]: I0129 15:45:11.604171 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qn7jr"] Jan 29 15:45:11 crc kubenswrapper[4620]: I0129 15:45:11.638628 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qn7jr" event={"ID":"7335a5e6-f470-4f8e-9816-f323736d2b5c","Type":"ContainerStarted","Data":"d363f077fdcc054ec111368bbfcc4bde500c02dc018d700aef52181fc4c21c8c"} Jan 29 15:45:12 crc kubenswrapper[4620]: E0129 15:45:12.040593 4620 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7335a5e6_f470_4f8e_9816_f323736d2b5c.slice/crio-conmon-3866b33355154790e4e73c3e816a0756327812a95fb976506d47b9aafb8be4d8.scope\": RecentStats: unable to find data in memory cache]" Jan 29 15:45:12 crc kubenswrapper[4620]: I0129 15:45:12.647420 4620 generic.go:334] "Generic (PLEG): container finished" podID="7335a5e6-f470-4f8e-9816-f323736d2b5c" containerID="3866b33355154790e4e73c3e816a0756327812a95fb976506d47b9aafb8be4d8" exitCode=0 Jan 29 15:45:12 crc kubenswrapper[4620]: I0129 15:45:12.647465 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qn7jr" event={"ID":"7335a5e6-f470-4f8e-9816-f323736d2b5c","Type":"ContainerDied","Data":"3866b33355154790e4e73c3e816a0756327812a95fb976506d47b9aafb8be4d8"} Jan 29 15:45:12 crc kubenswrapper[4620]: E0129 15:45:12.771411 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:45:12 crc kubenswrapper[4620]: E0129 15:45:12.771574 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
Jan 29 15:45:12 crc kubenswrapper[4620]: E0129 15:45:12.771574 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4q54t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qn7jr_openshift-marketplace(7335a5e6-f470-4f8e-9816-f323736d2b5c): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 15:45:12 crc kubenswrapper[4620]: E0129 15:45:12.772798 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-qn7jr" podUID="7335a5e6-f470-4f8e-9816-f323736d2b5c"
Jan 29 15:45:13 crc kubenswrapper[4620]: E0129 15:45:13.656024 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qn7jr" podUID="7335a5e6-f470-4f8e-9816-f323736d2b5c"
Jan 29 15:45:14 crc kubenswrapper[4620]: I0129 15:45:14.663201 4620 scope.go:117] "RemoveContainer" containerID="502515f1acee8bd32ea86f0d38418ac91f76362106c55fc1f4f18e1e3902ff4f"
Jan 29 15:45:20 crc kubenswrapper[4620]: E0129 15:45:20.880157 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-s6zv7" podUID="11cdc57e-71d4-44ba-832d-58ad4bc76d2a"
Jan 29 15:45:21 crc kubenswrapper[4620]: E0129 15:45:21.874681 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062"
Jan 29 15:45:25 crc kubenswrapper[4620]: E0129 15:45:25.993943 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 29 15:45:25 crc kubenswrapper[4620]: E0129 15:45:25.995189 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4q54t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qn7jr_openshift-marketplace(7335a5e6-f470-4f8e-9816-f323736d2b5c): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 15:45:25 crc kubenswrapper[4620]: E0129 15:45:25.996389 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-qn7jr" podUID="7335a5e6-f470-4f8e-9816-f323736d2b5c"
Jan 29 15:45:33 crc kubenswrapper[4620]: E0129 15:45:33.874564 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-s6zv7" podUID="11cdc57e-71d4-44ba-832d-58ad4bc76d2a"
Jan 29 15:45:35 crc kubenswrapper[4620]: E0129 15:45:35.993632 4620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 29 15:45:35 crc kubenswrapper[4620]: E0129 15:45:35.994714 4620 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29gkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-44m7n_openshift-marketplace(aa8615ea-eb4a-485e-af48-d166ccfb0062): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 15:45:35 crc kubenswrapper[4620]: E0129 15:45:35.996078 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062"
Jan 29 15:45:37 crc kubenswrapper[4620]: E0129 15:45:37.873644 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qn7jr" podUID="7335a5e6-f470-4f8e-9816-f323736d2b5c"
Jan 29 15:45:46 crc kubenswrapper[4620]: E0129 15:45:46.874635 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-s6zv7" podUID="11cdc57e-71d4-44ba-832d-58ad4bc76d2a"
Jan 29 15:45:49 crc kubenswrapper[4620]: I0129 15:45:49.911915 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qn7jr" event={"ID":"7335a5e6-f470-4f8e-9816-f323736d2b5c","Type":"ContainerStarted","Data":"a80f48bc517091c809a0e8eee9495a4a4329957691a3fc9f9e05d901657cfce3"}
Jan 29 15:45:50 crc kubenswrapper[4620]: E0129 15:45:50.879235 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062"
Jan 29 15:45:50 crc kubenswrapper[4620]: I0129 15:45:50.920033 4620 generic.go:334] "Generic (PLEG): container finished" podID="7335a5e6-f470-4f8e-9816-f323736d2b5c" containerID="a80f48bc517091c809a0e8eee9495a4a4329957691a3fc9f9e05d901657cfce3" exitCode=0
Jan 29 15:45:50 crc kubenswrapper[4620]: I0129 15:45:50.920080 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qn7jr" event={"ID":"7335a5e6-f470-4f8e-9816-f323736d2b5c","Type":"ContainerDied","Data":"a80f48bc517091c809a0e8eee9495a4a4329957691a3fc9f9e05d901657cfce3"}
Jan 29 15:45:51 crc kubenswrapper[4620]: I0129 15:45:51.927721 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qn7jr" event={"ID":"7335a5e6-f470-4f8e-9816-f323736d2b5c","Type":"ContainerStarted","Data":"34b051cea5c169f8c877614ce5c5c574b01c799976f79dfb244668cf1360e0b7"}
Jan 29 15:46:00 crc kubenswrapper[4620]: I0129 15:46:00.983405 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6zv7" event={"ID":"11cdc57e-71d4-44ba-832d-58ad4bc76d2a","Type":"ContainerStarted","Data":"133099d1a687f980c3785a6c134b2ff25b5166d306a86e35e377bbf8febc87b8"}
Jan 29 15:46:01 crc kubenswrapper[4620]: I0129 15:46:01.005009 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qn7jr" podStartSLOduration=12.292349633 podStartE2EDuration="51.004990304s" podCreationTimestamp="2026-01-29 15:45:10 +0000 UTC" firstStartedPulling="2026-01-29 15:45:12.649614455 +0000 UTC m=+2653.262442120" lastFinishedPulling="2026-01-29 15:45:51.362255146 +0000 UTC m=+2691.975082791" observedRunningTime="2026-01-29 15:45:51.95293209 +0000 UTC m=+2692.565759735" watchObservedRunningTime="2026-01-29 15:46:01.004990304 +0000 UTC m=+2701.617817959"
Jan 29 15:46:01 crc kubenswrapper[4620]: I0129 15:46:01.065650 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qn7jr"
Jan 29 15:46:01 crc kubenswrapper[4620]: I0129 15:46:01.065968 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qn7jr"
Jan 29 15:46:01 crc kubenswrapper[4620]: I0129 15:46:01.107394 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qn7jr"
Jan 29 15:46:01 crc kubenswrapper[4620]: I0129 15:46:01.994557 4620 generic.go:334] "Generic (PLEG): container finished" podID="11cdc57e-71d4-44ba-832d-58ad4bc76d2a" containerID="133099d1a687f980c3785a6c134b2ff25b5166d306a86e35e377bbf8febc87b8" exitCode=0
Jan 29 15:46:01 crc kubenswrapper[4620]: I0129 15:46:01.994749 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6zv7" event={"ID":"11cdc57e-71d4-44ba-832d-58ad4bc76d2a","Type":"ContainerDied","Data":"133099d1a687f980c3785a6c134b2ff25b5166d306a86e35e377bbf8febc87b8"}
Jan 29 15:46:02 crc kubenswrapper[4620]: I0129 15:46:02.067695 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qn7jr"
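The "Observed pod startup duration" entry above is internally consistent: using the monotonic m= offsets, the image pull for certified-operators-qn7jr ran from m=+2653.262442120 (firstStartedPulling) to m=+2691.975082791 (lastFinishedPulling), i.e. 38.712640671s, and podStartSLOduration is the end-to-end startup time minus that pull window: 51.004990304s - 38.712640671s = 12.292349633s, exactly the logged value. In other words, the SLO-tracked duration excludes time spent pulling images, which is why a pod that took almost a minute to come up still reports an SLO duration of ~12s.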
Jan 29 15:46:02 crc kubenswrapper[4620]: E0129 15:46:02.873666 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062"
Jan 29 15:46:03 crc kubenswrapper[4620]: I0129 15:46:03.424051 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qn7jr"]
Jan 29 15:46:04 crc kubenswrapper[4620]: I0129 15:46:04.111349 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 15:46:04 crc kubenswrapper[4620]: I0129 15:46:04.111405 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 15:46:05 crc kubenswrapper[4620]: I0129 15:46:05.012958 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qn7jr" podUID="7335a5e6-f470-4f8e-9816-f323736d2b5c" containerName="registry-server" containerID="cri-o://34b051cea5c169f8c877614ce5c5c574b01c799976f79dfb244668cf1360e0b7" gracePeriod=2
Jan 29 15:46:06 crc kubenswrapper[4620]: I0129 15:46:06.023001 4620 generic.go:334] "Generic (PLEG): container finished" podID="7335a5e6-f470-4f8e-9816-f323736d2b5c" containerID="34b051cea5c169f8c877614ce5c5c574b01c799976f79dfb244668cf1360e0b7" exitCode=0
Jan 29 15:46:06 crc kubenswrapper[4620]: I0129 15:46:06.023041 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qn7jr" event={"ID":"7335a5e6-f470-4f8e-9816-f323736d2b5c","Type":"ContainerDied","Data":"34b051cea5c169f8c877614ce5c5c574b01c799976f79dfb244668cf1360e0b7"}
Jan 29 15:46:06 crc kubenswrapper[4620]: I0129 15:46:06.906512 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qn7jr"
Jan 29 15:46:07 crc kubenswrapper[4620]: I0129 15:46:07.032046 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6zv7" event={"ID":"11cdc57e-71d4-44ba-832d-58ad4bc76d2a","Type":"ContainerStarted","Data":"4477a7d5759a9b66d2984c821987ba49a93d97ad0a9c9a1b8fcc3a194774109d"}
Jan 29 15:46:07 crc kubenswrapper[4620]: I0129 15:46:07.034486 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qn7jr" event={"ID":"7335a5e6-f470-4f8e-9816-f323736d2b5c","Type":"ContainerDied","Data":"d363f077fdcc054ec111368bbfcc4bde500c02dc018d700aef52181fc4c21c8c"}
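The machine-config-daemon Liveness failure above is a plain HTTP GET against the pod's configured endpoint; "connection refused" means nothing was accepting connections on 127.0.0.1:8798 at probe time. A minimal sketch of such an HTTP probe, treating any 2xx/3xx response as healthy (illustrative, not the kubelet prober itself):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func probe(url string) error {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            return nil
        }
        return fmt.Errorf("unhealthy status: %s", resp.Status)
    }

    func main() {
        if err := probe("http://127.0.0.1:8798/health"); err != nil {
            fmt.Println("Probe failed:", err)
        }
    }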
Jan 29 15:46:07 crc kubenswrapper[4620]: I0129 15:46:07.034522 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qn7jr"
Jan 29 15:46:07 crc kubenswrapper[4620]: I0129 15:46:07.034552 4620 scope.go:117] "RemoveContainer" containerID="34b051cea5c169f8c877614ce5c5c574b01c799976f79dfb244668cf1360e0b7"
Jan 29 15:46:07 crc kubenswrapper[4620]: I0129 15:46:07.053622 4620 scope.go:117] "RemoveContainer" containerID="a80f48bc517091c809a0e8eee9495a4a4329957691a3fc9f9e05d901657cfce3"
Jan 29 15:46:07 crc kubenswrapper[4620]: I0129 15:46:07.054874 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s6zv7" podStartSLOduration=3.458698186 podStartE2EDuration="1m43.05485331s" podCreationTimestamp="2026-01-29 15:44:24 +0000 UTC" firstStartedPulling="2026-01-29 15:44:26.358929091 +0000 UTC m=+2606.971756776" lastFinishedPulling="2026-01-29 15:46:05.955084255 +0000 UTC m=+2706.567911900" observedRunningTime="2026-01-29 15:46:07.053133046 +0000 UTC m=+2707.665960691" watchObservedRunningTime="2026-01-29 15:46:07.05485331 +0000 UTC m=+2707.667680975"
Jan 29 15:46:07 crc kubenswrapper[4620]: I0129 15:46:07.073183 4620 scope.go:117] "RemoveContainer" containerID="3866b33355154790e4e73c3e816a0756327812a95fb976506d47b9aafb8be4d8"
Jan 29 15:46:07 crc kubenswrapper[4620]: I0129 15:46:07.093829 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q54t\" (UniqueName: \"kubernetes.io/projected/7335a5e6-f470-4f8e-9816-f323736d2b5c-kube-api-access-4q54t\") pod \"7335a5e6-f470-4f8e-9816-f323736d2b5c\" (UID: \"7335a5e6-f470-4f8e-9816-f323736d2b5c\") "
Jan 29 15:46:07 crc kubenswrapper[4620]: I0129 15:46:07.093867 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7335a5e6-f470-4f8e-9816-f323736d2b5c-catalog-content\") pod \"7335a5e6-f470-4f8e-9816-f323736d2b5c\" (UID: \"7335a5e6-f470-4f8e-9816-f323736d2b5c\") "
Jan 29 15:46:07 crc kubenswrapper[4620]: I0129 15:46:07.093988 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7335a5e6-f470-4f8e-9816-f323736d2b5c-utilities\") pod \"7335a5e6-f470-4f8e-9816-f323736d2b5c\" (UID: \"7335a5e6-f470-4f8e-9816-f323736d2b5c\") "
Jan 29 15:46:07 crc kubenswrapper[4620]: I0129 15:46:07.094817 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7335a5e6-f470-4f8e-9816-f323736d2b5c-utilities" (OuterVolumeSpecName: "utilities") pod "7335a5e6-f470-4f8e-9816-f323736d2b5c" (UID: "7335a5e6-f470-4f8e-9816-f323736d2b5c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:46:07 crc kubenswrapper[4620]: I0129 15:46:07.100701 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7335a5e6-f470-4f8e-9816-f323736d2b5c-kube-api-access-4q54t" (OuterVolumeSpecName: "kube-api-access-4q54t") pod "7335a5e6-f470-4f8e-9816-f323736d2b5c" (UID: "7335a5e6-f470-4f8e-9816-f323736d2b5c"). InnerVolumeSpecName "kube-api-access-4q54t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:46:07 crc kubenswrapper[4620]: I0129 15:46:07.141289 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7335a5e6-f470-4f8e-9816-f323736d2b5c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7335a5e6-f470-4f8e-9816-f323736d2b5c" (UID: "7335a5e6-f470-4f8e-9816-f323736d2b5c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:46:07 crc kubenswrapper[4620]: I0129 15:46:07.195636 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7335a5e6-f470-4f8e-9816-f323736d2b5c-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 15:46:07 crc kubenswrapper[4620]: I0129 15:46:07.196104 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q54t\" (UniqueName: \"kubernetes.io/projected/7335a5e6-f470-4f8e-9816-f323736d2b5c-kube-api-access-4q54t\") on node \"crc\" DevicePath \"\""
Jan 29 15:46:07 crc kubenswrapper[4620]: I0129 15:46:07.196128 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7335a5e6-f470-4f8e-9816-f323736d2b5c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 15:46:07 crc kubenswrapper[4620]: I0129 15:46:07.367371 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qn7jr"]
Jan 29 15:46:07 crc kubenswrapper[4620]: I0129 15:46:07.373152 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qn7jr"]
Jan 29 15:46:08 crc kubenswrapper[4620]: I0129 15:46:08.880381 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7335a5e6-f470-4f8e-9816-f323736d2b5c" path="/var/lib/kubelet/pods/7335a5e6-f470-4f8e-9816-f323736d2b5c/volumes"
Jan 29 15:46:15 crc kubenswrapper[4620]: I0129 15:46:15.160056 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s6zv7"
Jan 29 15:46:15 crc kubenswrapper[4620]: I0129 15:46:15.160635 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s6zv7"
Jan 29 15:46:15 crc kubenswrapper[4620]: I0129 15:46:15.201243 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s6zv7"
Jan 29 15:46:16 crc kubenswrapper[4620]: I0129 15:46:16.173986 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s6zv7"
Jan 29 15:46:16 crc kubenswrapper[4620]: I0129 15:46:16.228644 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s6zv7"]
Jan 29 15:46:17 crc kubenswrapper[4620]: E0129 15:46:17.874704 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062"
Jan 29 15:46:18 crc kubenswrapper[4620]: I0129 15:46:18.145459 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s6zv7" podUID="11cdc57e-71d4-44ba-832d-58ad4bc76d2a" containerName="registry-server" containerID="cri-o://4477a7d5759a9b66d2984c821987ba49a93d97ad0a9c9a1b8fcc3a194774109d" gracePeriod=2
Jan 29 15:46:19 crc kubenswrapper[4620]: I0129 15:46:19.156704 4620 generic.go:334] "Generic (PLEG): container finished" podID="11cdc57e-71d4-44ba-832d-58ad4bc76d2a" containerID="4477a7d5759a9b66d2984c821987ba49a93d97ad0a9c9a1b8fcc3a194774109d" exitCode=0
Jan 29 15:46:19 crc kubenswrapper[4620]: I0129 15:46:19.156792 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6zv7" event={"ID":"11cdc57e-71d4-44ba-832d-58ad4bc76d2a","Type":"ContainerDied","Data":"4477a7d5759a9b66d2984c821987ba49a93d97ad0a9c9a1b8fcc3a194774109d"}
Jan 29 15:46:19 crc kubenswrapper[4620]: I0129 15:46:19.780716 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s6zv7"
Jan 29 15:46:19 crc kubenswrapper[4620]: I0129 15:46:19.808862 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddc4w\" (UniqueName: \"kubernetes.io/projected/11cdc57e-71d4-44ba-832d-58ad4bc76d2a-kube-api-access-ddc4w\") pod \"11cdc57e-71d4-44ba-832d-58ad4bc76d2a\" (UID: \"11cdc57e-71d4-44ba-832d-58ad4bc76d2a\") "
Jan 29 15:46:19 crc kubenswrapper[4620]: I0129 15:46:19.808938 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11cdc57e-71d4-44ba-832d-58ad4bc76d2a-catalog-content\") pod \"11cdc57e-71d4-44ba-832d-58ad4bc76d2a\" (UID: \"11cdc57e-71d4-44ba-832d-58ad4bc76d2a\") "
Jan 29 15:46:19 crc kubenswrapper[4620]: I0129 15:46:19.808967 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11cdc57e-71d4-44ba-832d-58ad4bc76d2a-utilities\") pod \"11cdc57e-71d4-44ba-832d-58ad4bc76d2a\" (UID: \"11cdc57e-71d4-44ba-832d-58ad4bc76d2a\") "
Jan 29 15:46:19 crc kubenswrapper[4620]: I0129 15:46:19.810021 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11cdc57e-71d4-44ba-832d-58ad4bc76d2a-utilities" (OuterVolumeSpecName: "utilities") pod "11cdc57e-71d4-44ba-832d-58ad4bc76d2a" (UID: "11cdc57e-71d4-44ba-832d-58ad4bc76d2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:46:19 crc kubenswrapper[4620]: I0129 15:46:19.823065 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11cdc57e-71d4-44ba-832d-58ad4bc76d2a-kube-api-access-ddc4w" (OuterVolumeSpecName: "kube-api-access-ddc4w") pod "11cdc57e-71d4-44ba-832d-58ad4bc76d2a" (UID: "11cdc57e-71d4-44ba-832d-58ad4bc76d2a"). InnerVolumeSpecName "kube-api-access-ddc4w". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:46:19 crc kubenswrapper[4620]: I0129 15:46:19.910472 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddc4w\" (UniqueName: \"kubernetes.io/projected/11cdc57e-71d4-44ba-832d-58ad4bc76d2a-kube-api-access-ddc4w\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:19 crc kubenswrapper[4620]: I0129 15:46:19.910528 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11cdc57e-71d4-44ba-832d-58ad4bc76d2a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:19 crc kubenswrapper[4620]: I0129 15:46:19.910538 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11cdc57e-71d4-44ba-832d-58ad4bc76d2a-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:46:20 crc kubenswrapper[4620]: I0129 15:46:20.167933 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6zv7" event={"ID":"11cdc57e-71d4-44ba-832d-58ad4bc76d2a","Type":"ContainerDied","Data":"02f5d8704bf703fcb0129e44036b7230ca4e5e1b1a9edad04f5fe57e81abb8f0"} Jan 29 15:46:20 crc kubenswrapper[4620]: I0129 15:46:20.168041 4620 scope.go:117] "RemoveContainer" containerID="4477a7d5759a9b66d2984c821987ba49a93d97ad0a9c9a1b8fcc3a194774109d" Jan 29 15:46:20 crc kubenswrapper[4620]: I0129 15:46:20.168305 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s6zv7" Jan 29 15:46:20 crc kubenswrapper[4620]: I0129 15:46:20.191084 4620 scope.go:117] "RemoveContainer" containerID="133099d1a687f980c3785a6c134b2ff25b5166d306a86e35e377bbf8febc87b8" Jan 29 15:46:20 crc kubenswrapper[4620]: I0129 15:46:20.221748 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s6zv7"] Jan 29 15:46:20 crc kubenswrapper[4620]: I0129 15:46:20.228053 4620 scope.go:117] "RemoveContainer" containerID="d9ea0e08891fbdeee9aa2c09ac6a69ce2650893d5ed86bcbe294f0d41e952706" Jan 29 15:46:20 crc kubenswrapper[4620]: I0129 15:46:20.230719 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s6zv7"] Jan 29 15:46:20 crc kubenswrapper[4620]: I0129 15:46:20.881146 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11cdc57e-71d4-44ba-832d-58ad4bc76d2a" path="/var/lib/kubelet/pods/11cdc57e-71d4-44ba-832d-58ad4bc76d2a/volumes" Jan 29 15:46:30 crc kubenswrapper[4620]: E0129 15:46:30.878137 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" Jan 29 15:46:34 crc kubenswrapper[4620]: I0129 15:46:34.111269 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:46:34 crc kubenswrapper[4620]: I0129 15:46:34.111328 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:46:42 crc kubenswrapper[4620]: I0129 15:46:42.626831 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k6pkh"] Jan 29 15:46:42 crc kubenswrapper[4620]: E0129 15:46:42.627563 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11cdc57e-71d4-44ba-832d-58ad4bc76d2a" containerName="registry-server" Jan 29 15:46:42 crc kubenswrapper[4620]: I0129 15:46:42.627581 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="11cdc57e-71d4-44ba-832d-58ad4bc76d2a" containerName="registry-server" Jan 29 15:46:42 crc kubenswrapper[4620]: E0129 15:46:42.627600 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11cdc57e-71d4-44ba-832d-58ad4bc76d2a" containerName="extract-utilities" Jan 29 15:46:42 crc kubenswrapper[4620]: I0129 15:46:42.627606 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="11cdc57e-71d4-44ba-832d-58ad4bc76d2a" containerName="extract-utilities" Jan 29 15:46:42 crc kubenswrapper[4620]: E0129 15:46:42.627619 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7335a5e6-f470-4f8e-9816-f323736d2b5c" containerName="extract-utilities" Jan 29 15:46:42 crc kubenswrapper[4620]: I0129 15:46:42.627627 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="7335a5e6-f470-4f8e-9816-f323736d2b5c" containerName="extract-utilities" Jan 29 15:46:42 crc kubenswrapper[4620]: E0129 15:46:42.627655 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11cdc57e-71d4-44ba-832d-58ad4bc76d2a" containerName="extract-content" Jan 29 15:46:42 crc kubenswrapper[4620]: I0129 15:46:42.627662 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="11cdc57e-71d4-44ba-832d-58ad4bc76d2a" containerName="extract-content" Jan 29 15:46:42 crc kubenswrapper[4620]: E0129 15:46:42.627671 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7335a5e6-f470-4f8e-9816-f323736d2b5c" containerName="registry-server" Jan 29 15:46:42 crc kubenswrapper[4620]: I0129 15:46:42.627676 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="7335a5e6-f470-4f8e-9816-f323736d2b5c" containerName="registry-server" Jan 29 15:46:42 crc kubenswrapper[4620]: E0129 15:46:42.627700 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7335a5e6-f470-4f8e-9816-f323736d2b5c" containerName="extract-content" Jan 29 15:46:42 crc kubenswrapper[4620]: I0129 15:46:42.627707 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="7335a5e6-f470-4f8e-9816-f323736d2b5c" containerName="extract-content" Jan 29 15:46:42 crc kubenswrapper[4620]: I0129 15:46:42.627896 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="11cdc57e-71d4-44ba-832d-58ad4bc76d2a" containerName="registry-server" Jan 29 15:46:42 crc kubenswrapper[4620]: I0129 15:46:42.627910 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="7335a5e6-f470-4f8e-9816-f323736d2b5c" containerName="registry-server" Jan 29 15:46:42 crc kubenswrapper[4620]: I0129 15:46:42.628806 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k6pkh" Jan 29 15:46:42 crc kubenswrapper[4620]: I0129 15:46:42.654947 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k6pkh"] Jan 29 15:46:42 crc kubenswrapper[4620]: I0129 15:46:42.823643 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38d1d409-1b69-4687-98d9-801134a8f3f5-catalog-content\") pod \"redhat-operators-k6pkh\" (UID: \"38d1d409-1b69-4687-98d9-801134a8f3f5\") " pod="openshift-marketplace/redhat-operators-k6pkh" Jan 29 15:46:42 crc kubenswrapper[4620]: I0129 15:46:42.823791 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38d1d409-1b69-4687-98d9-801134a8f3f5-utilities\") pod \"redhat-operators-k6pkh\" (UID: \"38d1d409-1b69-4687-98d9-801134a8f3f5\") " pod="openshift-marketplace/redhat-operators-k6pkh" Jan 29 15:46:42 crc kubenswrapper[4620]: I0129 15:46:42.823826 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdw5n\" (UniqueName: \"kubernetes.io/projected/38d1d409-1b69-4687-98d9-801134a8f3f5-kube-api-access-xdw5n\") pod \"redhat-operators-k6pkh\" (UID: \"38d1d409-1b69-4687-98d9-801134a8f3f5\") " pod="openshift-marketplace/redhat-operators-k6pkh" Jan 29 15:46:42 crc kubenswrapper[4620]: E0129 15:46:42.874914 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" Jan 29 15:46:42 crc kubenswrapper[4620]: I0129 15:46:42.924731 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38d1d409-1b69-4687-98d9-801134a8f3f5-catalog-content\") pod \"redhat-operators-k6pkh\" (UID: \"38d1d409-1b69-4687-98d9-801134a8f3f5\") " pod="openshift-marketplace/redhat-operators-k6pkh" Jan 29 15:46:42 crc kubenswrapper[4620]: I0129 15:46:42.924869 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38d1d409-1b69-4687-98d9-801134a8f3f5-utilities\") pod \"redhat-operators-k6pkh\" (UID: \"38d1d409-1b69-4687-98d9-801134a8f3f5\") " pod="openshift-marketplace/redhat-operators-k6pkh" Jan 29 15:46:42 crc kubenswrapper[4620]: I0129 15:46:42.924892 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdw5n\" (UniqueName: \"kubernetes.io/projected/38d1d409-1b69-4687-98d9-801134a8f3f5-kube-api-access-xdw5n\") pod \"redhat-operators-k6pkh\" (UID: \"38d1d409-1b69-4687-98d9-801134a8f3f5\") " pod="openshift-marketplace/redhat-operators-k6pkh" Jan 29 15:46:42 crc kubenswrapper[4620]: I0129 15:46:42.925309 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38d1d409-1b69-4687-98d9-801134a8f3f5-catalog-content\") pod \"redhat-operators-k6pkh\" (UID: \"38d1d409-1b69-4687-98d9-801134a8f3f5\") " pod="openshift-marketplace/redhat-operators-k6pkh" Jan 29 15:46:42 crc kubenswrapper[4620]: I0129 15:46:42.925455 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38d1d409-1b69-4687-98d9-801134a8f3f5-utilities\") pod \"redhat-operators-k6pkh\" (UID: \"38d1d409-1b69-4687-98d9-801134a8f3f5\") " pod="openshift-marketplace/redhat-operators-k6pkh" Jan 29 15:46:42 crc kubenswrapper[4620]: I0129 15:46:42.944157 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdw5n\" (UniqueName: \"kubernetes.io/projected/38d1d409-1b69-4687-98d9-801134a8f3f5-kube-api-access-xdw5n\") pod \"redhat-operators-k6pkh\" (UID: \"38d1d409-1b69-4687-98d9-801134a8f3f5\") " pod="openshift-marketplace/redhat-operators-k6pkh" Jan 29 15:46:42 crc kubenswrapper[4620]: I0129 15:46:42.949614 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k6pkh" Jan 29 15:46:43 crc kubenswrapper[4620]: I0129 15:46:43.419603 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k6pkh"] Jan 29 15:46:44 crc kubenswrapper[4620]: I0129 15:46:44.329452 4620 generic.go:334] "Generic (PLEG): container finished" podID="38d1d409-1b69-4687-98d9-801134a8f3f5" containerID="35e2a41cc319b84aab49d14e792c47523a767c3b0f533822a6de7af1e209a378" exitCode=0 Jan 29 15:46:44 crc kubenswrapper[4620]: I0129 15:46:44.329571 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k6pkh" event={"ID":"38d1d409-1b69-4687-98d9-801134a8f3f5","Type":"ContainerDied","Data":"35e2a41cc319b84aab49d14e792c47523a767c3b0f533822a6de7af1e209a378"} Jan 29 15:46:44 crc kubenswrapper[4620]: I0129 15:46:44.330040 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k6pkh" event={"ID":"38d1d409-1b69-4687-98d9-801134a8f3f5","Type":"ContainerStarted","Data":"08e4dda8520cc2a3b151762c4f7aaf15f3f5c83cc688234c4653c53095c97867"} Jan 29 15:46:48 crc kubenswrapper[4620]: I0129 15:46:48.367277 4620 generic.go:334] "Generic (PLEG): container finished" podID="38d1d409-1b69-4687-98d9-801134a8f3f5" containerID="2a6d37fb61572b2dc80a0bb2985192a7920f6d0a698f207452fe1947cdc15d36" exitCode=0 Jan 29 15:46:48 crc kubenswrapper[4620]: I0129 15:46:48.367725 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k6pkh" event={"ID":"38d1d409-1b69-4687-98d9-801134a8f3f5","Type":"ContainerDied","Data":"2a6d37fb61572b2dc80a0bb2985192a7920f6d0a698f207452fe1947cdc15d36"} Jan 29 15:46:50 crc kubenswrapper[4620]: I0129 15:46:50.422499 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k6pkh" event={"ID":"38d1d409-1b69-4687-98d9-801134a8f3f5","Type":"ContainerStarted","Data":"cc1694b2c3d572b410e511bc84d3a792827e53eb92e84aedcf8b8e4eb7eda517"} Jan 29 15:46:50 crc kubenswrapper[4620]: I0129 15:46:50.438973 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k6pkh" podStartSLOduration=4.661390001 podStartE2EDuration="8.438955949s" podCreationTimestamp="2026-01-29 15:46:42 +0000 UTC" firstStartedPulling="2026-01-29 15:46:45.33825053 +0000 UTC m=+2745.951078195" lastFinishedPulling="2026-01-29 15:46:49.115816498 +0000 UTC m=+2749.728644143" observedRunningTime="2026-01-29 15:46:50.435892634 +0000 UTC m=+2751.048720299" watchObservedRunningTime="2026-01-29 15:46:50.438955949 +0000 UTC m=+2751.051783604" Jan 29 15:46:52 crc kubenswrapper[4620]: I0129 15:46:52.950473 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-k6pkh" Jan 29 15:46:52 crc kubenswrapper[4620]: I0129 15:46:52.950528 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k6pkh" Jan 29 15:46:54 crc kubenswrapper[4620]: I0129 15:46:54.011044 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k6pkh" podUID="38d1d409-1b69-4687-98d9-801134a8f3f5" containerName="registry-server" probeResult="failure" output=< Jan 29 15:46:54 crc kubenswrapper[4620]: timeout: failed to connect service ":50051" within 1s Jan 29 15:46:54 crc kubenswrapper[4620]: > Jan 29 15:46:57 crc kubenswrapper[4620]: E0129 15:46:57.874299 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" Jan 29 15:47:02 crc kubenswrapper[4620]: I0129 15:47:02.988286 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k6pkh" Jan 29 15:47:03 crc kubenswrapper[4620]: I0129 15:47:03.031124 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k6pkh" Jan 29 15:47:03 crc kubenswrapper[4620]: I0129 15:47:03.221442 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k6pkh"] Jan 29 15:47:04 crc kubenswrapper[4620]: I0129 15:47:04.111377 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:47:04 crc kubenswrapper[4620]: I0129 15:47:04.111445 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:47:04 crc kubenswrapper[4620]: I0129 15:47:04.111489 4620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:47:04 crc kubenswrapper[4620]: I0129 15:47:04.112211 4620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0363493f5282aa072e0d3aa4dfc81106b236f93cd58f3bf51671a562d1114bd3"} pod="openshift-machine-config-operator/machine-config-daemon-7469t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:47:04 crc kubenswrapper[4620]: I0129 15:47:04.112273 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" containerID="cri-o://0363493f5282aa072e0d3aa4dfc81106b236f93cd58f3bf51671a562d1114bd3" gracePeriod=600 Jan 29 15:47:04 crc kubenswrapper[4620]: I0129 15:47:04.509885 4620 generic.go:334] "Generic (PLEG): container finished" podID="a76cce43-3d01-4158-b23a-e21fd5927792" 
containerID="0363493f5282aa072e0d3aa4dfc81106b236f93cd58f3bf51671a562d1114bd3" exitCode=0 Jan 29 15:47:04 crc kubenswrapper[4620]: I0129 15:47:04.510050 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerDied","Data":"0363493f5282aa072e0d3aa4dfc81106b236f93cd58f3bf51671a562d1114bd3"} Jan 29 15:47:04 crc kubenswrapper[4620]: I0129 15:47:04.510204 4620 scope.go:117] "RemoveContainer" containerID="9ba2a7b104fa2d2f13ad3de02aa6ee8c0c4f5cf01833710d44ab955e5273bb19" Jan 29 15:47:04 crc kubenswrapper[4620]: I0129 15:47:04.510362 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k6pkh" podUID="38d1d409-1b69-4687-98d9-801134a8f3f5" containerName="registry-server" containerID="cri-o://cc1694b2c3d572b410e511bc84d3a792827e53eb92e84aedcf8b8e4eb7eda517" gracePeriod=2 Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.432688 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k6pkh" Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.461532 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38d1d409-1b69-4687-98d9-801134a8f3f5-utilities\") pod \"38d1d409-1b69-4687-98d9-801134a8f3f5\" (UID: \"38d1d409-1b69-4687-98d9-801134a8f3f5\") " Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.461615 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdw5n\" (UniqueName: \"kubernetes.io/projected/38d1d409-1b69-4687-98d9-801134a8f3f5-kube-api-access-xdw5n\") pod \"38d1d409-1b69-4687-98d9-801134a8f3f5\" (UID: \"38d1d409-1b69-4687-98d9-801134a8f3f5\") " Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.461679 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38d1d409-1b69-4687-98d9-801134a8f3f5-catalog-content\") pod \"38d1d409-1b69-4687-98d9-801134a8f3f5\" (UID: \"38d1d409-1b69-4687-98d9-801134a8f3f5\") " Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.463597 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38d1d409-1b69-4687-98d9-801134a8f3f5-utilities" (OuterVolumeSpecName: "utilities") pod "38d1d409-1b69-4687-98d9-801134a8f3f5" (UID: "38d1d409-1b69-4687-98d9-801134a8f3f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.502858 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38d1d409-1b69-4687-98d9-801134a8f3f5-kube-api-access-xdw5n" (OuterVolumeSpecName: "kube-api-access-xdw5n") pod "38d1d409-1b69-4687-98d9-801134a8f3f5" (UID: "38d1d409-1b69-4687-98d9-801134a8f3f5"). InnerVolumeSpecName "kube-api-access-xdw5n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.543783 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerStarted","Data":"6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699"} Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.549336 4620 generic.go:334] "Generic (PLEG): container finished" podID="38d1d409-1b69-4687-98d9-801134a8f3f5" containerID="cc1694b2c3d572b410e511bc84d3a792827e53eb92e84aedcf8b8e4eb7eda517" exitCode=0 Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.549394 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k6pkh" Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.549393 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k6pkh" event={"ID":"38d1d409-1b69-4687-98d9-801134a8f3f5","Type":"ContainerDied","Data":"cc1694b2c3d572b410e511bc84d3a792827e53eb92e84aedcf8b8e4eb7eda517"} Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.549578 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k6pkh" event={"ID":"38d1d409-1b69-4687-98d9-801134a8f3f5","Type":"ContainerDied","Data":"08e4dda8520cc2a3b151762c4f7aaf15f3f5c83cc688234c4653c53095c97867"} Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.549600 4620 scope.go:117] "RemoveContainer" containerID="cc1694b2c3d572b410e511bc84d3a792827e53eb92e84aedcf8b8e4eb7eda517" Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.563646 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38d1d409-1b69-4687-98d9-801134a8f3f5-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.563679 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdw5n\" (UniqueName: \"kubernetes.io/projected/38d1d409-1b69-4687-98d9-801134a8f3f5-kube-api-access-xdw5n\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.578588 4620 scope.go:117] "RemoveContainer" containerID="2a6d37fb61572b2dc80a0bb2985192a7920f6d0a698f207452fe1947cdc15d36" Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.600932 4620 scope.go:117] "RemoveContainer" containerID="35e2a41cc319b84aab49d14e792c47523a767c3b0f533822a6de7af1e209a378" Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.607991 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38d1d409-1b69-4687-98d9-801134a8f3f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38d1d409-1b69-4687-98d9-801134a8f3f5" (UID: "38d1d409-1b69-4687-98d9-801134a8f3f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.617026 4620 scope.go:117] "RemoveContainer" containerID="cc1694b2c3d572b410e511bc84d3a792827e53eb92e84aedcf8b8e4eb7eda517" Jan 29 15:47:05 crc kubenswrapper[4620]: E0129 15:47:05.617521 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc1694b2c3d572b410e511bc84d3a792827e53eb92e84aedcf8b8e4eb7eda517\": container with ID starting with cc1694b2c3d572b410e511bc84d3a792827e53eb92e84aedcf8b8e4eb7eda517 not found: ID does not exist" containerID="cc1694b2c3d572b410e511bc84d3a792827e53eb92e84aedcf8b8e4eb7eda517" Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.617554 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc1694b2c3d572b410e511bc84d3a792827e53eb92e84aedcf8b8e4eb7eda517"} err="failed to get container status \"cc1694b2c3d572b410e511bc84d3a792827e53eb92e84aedcf8b8e4eb7eda517\": rpc error: code = NotFound desc = could not find container \"cc1694b2c3d572b410e511bc84d3a792827e53eb92e84aedcf8b8e4eb7eda517\": container with ID starting with cc1694b2c3d572b410e511bc84d3a792827e53eb92e84aedcf8b8e4eb7eda517 not found: ID does not exist" Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.617579 4620 scope.go:117] "RemoveContainer" containerID="2a6d37fb61572b2dc80a0bb2985192a7920f6d0a698f207452fe1947cdc15d36" Jan 29 15:47:05 crc kubenswrapper[4620]: E0129 15:47:05.618106 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a6d37fb61572b2dc80a0bb2985192a7920f6d0a698f207452fe1947cdc15d36\": container with ID starting with 2a6d37fb61572b2dc80a0bb2985192a7920f6d0a698f207452fe1947cdc15d36 not found: ID does not exist" containerID="2a6d37fb61572b2dc80a0bb2985192a7920f6d0a698f207452fe1947cdc15d36" Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.618131 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a6d37fb61572b2dc80a0bb2985192a7920f6d0a698f207452fe1947cdc15d36"} err="failed to get container status \"2a6d37fb61572b2dc80a0bb2985192a7920f6d0a698f207452fe1947cdc15d36\": rpc error: code = NotFound desc = could not find container \"2a6d37fb61572b2dc80a0bb2985192a7920f6d0a698f207452fe1947cdc15d36\": container with ID starting with 2a6d37fb61572b2dc80a0bb2985192a7920f6d0a698f207452fe1947cdc15d36 not found: ID does not exist" Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.618146 4620 scope.go:117] "RemoveContainer" containerID="35e2a41cc319b84aab49d14e792c47523a767c3b0f533822a6de7af1e209a378" Jan 29 15:47:05 crc kubenswrapper[4620]: E0129 15:47:05.618517 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35e2a41cc319b84aab49d14e792c47523a767c3b0f533822a6de7af1e209a378\": container with ID starting with 35e2a41cc319b84aab49d14e792c47523a767c3b0f533822a6de7af1e209a378 not found: ID does not exist" containerID="35e2a41cc319b84aab49d14e792c47523a767c3b0f533822a6de7af1e209a378" Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.618537 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35e2a41cc319b84aab49d14e792c47523a767c3b0f533822a6de7af1e209a378"} err="failed to get container status \"35e2a41cc319b84aab49d14e792c47523a767c3b0f533822a6de7af1e209a378\": rpc error: code = NotFound desc = could not 
find container \"35e2a41cc319b84aab49d14e792c47523a767c3b0f533822a6de7af1e209a378\": container with ID starting with 35e2a41cc319b84aab49d14e792c47523a767c3b0f533822a6de7af1e209a378 not found: ID does not exist" Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.664657 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38d1d409-1b69-4687-98d9-801134a8f3f5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.886246 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k6pkh"] Jan 29 15:47:05 crc kubenswrapper[4620]: I0129 15:47:05.892120 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k6pkh"] Jan 29 15:47:06 crc kubenswrapper[4620]: I0129 15:47:06.879879 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38d1d409-1b69-4687-98d9-801134a8f3f5" path="/var/lib/kubelet/pods/38d1d409-1b69-4687-98d9-801134a8f3f5/volumes" Jan 29 15:47:09 crc kubenswrapper[4620]: E0129 15:47:09.875859 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" Jan 29 15:47:24 crc kubenswrapper[4620]: E0129 15:47:24.875510 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" Jan 29 15:47:39 crc kubenswrapper[4620]: E0129 15:47:39.874955 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" Jan 29 15:47:54 crc kubenswrapper[4620]: E0129 15:47:54.874030 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" Jan 29 15:48:07 crc kubenswrapper[4620]: E0129 15:48:07.876545 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" Jan 29 15:48:19 crc kubenswrapper[4620]: I0129 15:48:19.876191 4620 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:48:21 crc kubenswrapper[4620]: I0129 15:48:21.094613 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44m7n" event={"ID":"aa8615ea-eb4a-485e-af48-d166ccfb0062","Type":"ContainerStarted","Data":"96278ba2e5a31176f7cf70111f873ce88d0fc473d10299fa546343357d16382e"} Jan 29 15:48:24 crc 
kubenswrapper[4620]: I0129 15:48:24.121191 4620 generic.go:334] "Generic (PLEG): container finished" podID="aa8615ea-eb4a-485e-af48-d166ccfb0062" containerID="96278ba2e5a31176f7cf70111f873ce88d0fc473d10299fa546343357d16382e" exitCode=0 Jan 29 15:48:24 crc kubenswrapper[4620]: I0129 15:48:24.121483 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44m7n" event={"ID":"aa8615ea-eb4a-485e-af48-d166ccfb0062","Type":"ContainerDied","Data":"96278ba2e5a31176f7cf70111f873ce88d0fc473d10299fa546343357d16382e"} Jan 29 15:48:25 crc kubenswrapper[4620]: I0129 15:48:25.129936 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44m7n" event={"ID":"aa8615ea-eb4a-485e-af48-d166ccfb0062","Type":"ContainerStarted","Data":"16eb1fda313d726149cbee0b78e20ccadbb02e46a4d79b46176a3228c06a0325"} Jan 29 15:48:25 crc kubenswrapper[4620]: I0129 15:48:25.151580 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-44m7n" podStartSLOduration=3.029590473 podStartE2EDuration="5m43.151557659s" podCreationTimestamp="2026-01-29 15:42:42 +0000 UTC" firstStartedPulling="2026-01-29 15:42:44.60028584 +0000 UTC m=+2505.213113495" lastFinishedPulling="2026-01-29 15:48:24.722253036 +0000 UTC m=+2845.335080681" observedRunningTime="2026-01-29 15:48:25.145735308 +0000 UTC m=+2845.758562973" watchObservedRunningTime="2026-01-29 15:48:25.151557659 +0000 UTC m=+2845.764385304" Jan 29 15:48:33 crc kubenswrapper[4620]: I0129 15:48:33.263735 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-44m7n" Jan 29 15:48:33 crc kubenswrapper[4620]: I0129 15:48:33.264598 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-44m7n" Jan 29 15:48:33 crc kubenswrapper[4620]: I0129 15:48:33.327539 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-44m7n" Jan 29 15:48:34 crc kubenswrapper[4620]: I0129 15:48:34.244345 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-44m7n" Jan 29 15:48:34 crc kubenswrapper[4620]: I0129 15:48:34.305084 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-44m7n"] Jan 29 15:48:36 crc kubenswrapper[4620]: I0129 15:48:36.219419 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-44m7n" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" containerName="registry-server" containerID="cri-o://16eb1fda313d726149cbee0b78e20ccadbb02e46a4d79b46176a3228c06a0325" gracePeriod=2 Jan 29 15:48:36 crc kubenswrapper[4620]: I0129 15:48:36.636001 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-44m7n" Jan 29 15:48:36 crc kubenswrapper[4620]: I0129 15:48:36.772426 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8615ea-eb4a-485e-af48-d166ccfb0062-utilities\") pod \"aa8615ea-eb4a-485e-af48-d166ccfb0062\" (UID: \"aa8615ea-eb4a-485e-af48-d166ccfb0062\") " Jan 29 15:48:36 crc kubenswrapper[4620]: I0129 15:48:36.772591 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29gkh\" (UniqueName: \"kubernetes.io/projected/aa8615ea-eb4a-485e-af48-d166ccfb0062-kube-api-access-29gkh\") pod \"aa8615ea-eb4a-485e-af48-d166ccfb0062\" (UID: \"aa8615ea-eb4a-485e-af48-d166ccfb0062\") " Jan 29 15:48:36 crc kubenswrapper[4620]: I0129 15:48:36.772641 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8615ea-eb4a-485e-af48-d166ccfb0062-catalog-content\") pod \"aa8615ea-eb4a-485e-af48-d166ccfb0062\" (UID: \"aa8615ea-eb4a-485e-af48-d166ccfb0062\") " Jan 29 15:48:36 crc kubenswrapper[4620]: I0129 15:48:36.773292 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa8615ea-eb4a-485e-af48-d166ccfb0062-utilities" (OuterVolumeSpecName: "utilities") pod "aa8615ea-eb4a-485e-af48-d166ccfb0062" (UID: "aa8615ea-eb4a-485e-af48-d166ccfb0062"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:48:36 crc kubenswrapper[4620]: I0129 15:48:36.781449 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa8615ea-eb4a-485e-af48-d166ccfb0062-kube-api-access-29gkh" (OuterVolumeSpecName: "kube-api-access-29gkh") pod "aa8615ea-eb4a-485e-af48-d166ccfb0062" (UID: "aa8615ea-eb4a-485e-af48-d166ccfb0062"). InnerVolumeSpecName "kube-api-access-29gkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:48:36 crc kubenswrapper[4620]: I0129 15:48:36.822435 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa8615ea-eb4a-485e-af48-d166ccfb0062-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa8615ea-eb4a-485e-af48-d166ccfb0062" (UID: "aa8615ea-eb4a-485e-af48-d166ccfb0062"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:48:36 crc kubenswrapper[4620]: I0129 15:48:36.874279 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29gkh\" (UniqueName: \"kubernetes.io/projected/aa8615ea-eb4a-485e-af48-d166ccfb0062-kube-api-access-29gkh\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:36 crc kubenswrapper[4620]: I0129 15:48:36.874307 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa8615ea-eb4a-485e-af48-d166ccfb0062-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:36 crc kubenswrapper[4620]: I0129 15:48:36.874317 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa8615ea-eb4a-485e-af48-d166ccfb0062-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:37 crc kubenswrapper[4620]: I0129 15:48:37.232581 4620 generic.go:334] "Generic (PLEG): container finished" podID="aa8615ea-eb4a-485e-af48-d166ccfb0062" containerID="16eb1fda313d726149cbee0b78e20ccadbb02e46a4d79b46176a3228c06a0325" exitCode=0 Jan 29 15:48:37 crc kubenswrapper[4620]: I0129 15:48:37.232631 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44m7n" event={"ID":"aa8615ea-eb4a-485e-af48-d166ccfb0062","Type":"ContainerDied","Data":"16eb1fda313d726149cbee0b78e20ccadbb02e46a4d79b46176a3228c06a0325"} Jan 29 15:48:37 crc kubenswrapper[4620]: I0129 15:48:37.232638 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-44m7n" Jan 29 15:48:37 crc kubenswrapper[4620]: I0129 15:48:37.232671 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44m7n" event={"ID":"aa8615ea-eb4a-485e-af48-d166ccfb0062","Type":"ContainerDied","Data":"8542697d5a93b24bf2f7681bccf17674f1dc529dbfe26ef2ab9823574b0c6224"} Jan 29 15:48:37 crc kubenswrapper[4620]: I0129 15:48:37.232690 4620 scope.go:117] "RemoveContainer" containerID="16eb1fda313d726149cbee0b78e20ccadbb02e46a4d79b46176a3228c06a0325" Jan 29 15:48:37 crc kubenswrapper[4620]: I0129 15:48:37.256201 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-44m7n"] Jan 29 15:48:37 crc kubenswrapper[4620]: I0129 15:48:37.261730 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-44m7n"] Jan 29 15:48:37 crc kubenswrapper[4620]: I0129 15:48:37.263021 4620 scope.go:117] "RemoveContainer" containerID="96278ba2e5a31176f7cf70111f873ce88d0fc473d10299fa546343357d16382e" Jan 29 15:48:37 crc kubenswrapper[4620]: I0129 15:48:37.285542 4620 scope.go:117] "RemoveContainer" containerID="8c4896c94cce28c248bc5bdfcf67053a4143234d7dfae664ccd596ce3a061313" Jan 29 15:48:37 crc kubenswrapper[4620]: I0129 15:48:37.313929 4620 scope.go:117] "RemoveContainer" containerID="16eb1fda313d726149cbee0b78e20ccadbb02e46a4d79b46176a3228c06a0325" Jan 29 15:48:37 crc kubenswrapper[4620]: E0129 15:48:37.314620 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16eb1fda313d726149cbee0b78e20ccadbb02e46a4d79b46176a3228c06a0325\": container with ID starting with 16eb1fda313d726149cbee0b78e20ccadbb02e46a4d79b46176a3228c06a0325 not found: ID does not exist" containerID="16eb1fda313d726149cbee0b78e20ccadbb02e46a4d79b46176a3228c06a0325" Jan 29 15:48:37 crc kubenswrapper[4620]: I0129 15:48:37.314674 
4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16eb1fda313d726149cbee0b78e20ccadbb02e46a4d79b46176a3228c06a0325"} err="failed to get container status \"16eb1fda313d726149cbee0b78e20ccadbb02e46a4d79b46176a3228c06a0325\": rpc error: code = NotFound desc = could not find container \"16eb1fda313d726149cbee0b78e20ccadbb02e46a4d79b46176a3228c06a0325\": container with ID starting with 16eb1fda313d726149cbee0b78e20ccadbb02e46a4d79b46176a3228c06a0325 not found: ID does not exist" Jan 29 15:48:37 crc kubenswrapper[4620]: I0129 15:48:37.314693 4620 scope.go:117] "RemoveContainer" containerID="96278ba2e5a31176f7cf70111f873ce88d0fc473d10299fa546343357d16382e" Jan 29 15:48:37 crc kubenswrapper[4620]: E0129 15:48:37.315057 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96278ba2e5a31176f7cf70111f873ce88d0fc473d10299fa546343357d16382e\": container with ID starting with 96278ba2e5a31176f7cf70111f873ce88d0fc473d10299fa546343357d16382e not found: ID does not exist" containerID="96278ba2e5a31176f7cf70111f873ce88d0fc473d10299fa546343357d16382e" Jan 29 15:48:37 crc kubenswrapper[4620]: I0129 15:48:37.315100 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96278ba2e5a31176f7cf70111f873ce88d0fc473d10299fa546343357d16382e"} err="failed to get container status \"96278ba2e5a31176f7cf70111f873ce88d0fc473d10299fa546343357d16382e\": rpc error: code = NotFound desc = could not find container \"96278ba2e5a31176f7cf70111f873ce88d0fc473d10299fa546343357d16382e\": container with ID starting with 96278ba2e5a31176f7cf70111f873ce88d0fc473d10299fa546343357d16382e not found: ID does not exist" Jan 29 15:48:37 crc kubenswrapper[4620]: I0129 15:48:37.315126 4620 scope.go:117] "RemoveContainer" containerID="8c4896c94cce28c248bc5bdfcf67053a4143234d7dfae664ccd596ce3a061313" Jan 29 15:48:37 crc kubenswrapper[4620]: E0129 15:48:37.315452 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c4896c94cce28c248bc5bdfcf67053a4143234d7dfae664ccd596ce3a061313\": container with ID starting with 8c4896c94cce28c248bc5bdfcf67053a4143234d7dfae664ccd596ce3a061313 not found: ID does not exist" containerID="8c4896c94cce28c248bc5bdfcf67053a4143234d7dfae664ccd596ce3a061313" Jan 29 15:48:37 crc kubenswrapper[4620]: I0129 15:48:37.315483 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c4896c94cce28c248bc5bdfcf67053a4143234d7dfae664ccd596ce3a061313"} err="failed to get container status \"8c4896c94cce28c248bc5bdfcf67053a4143234d7dfae664ccd596ce3a061313\": rpc error: code = NotFound desc = could not find container \"8c4896c94cce28c248bc5bdfcf67053a4143234d7dfae664ccd596ce3a061313\": container with ID starting with 8c4896c94cce28c248bc5bdfcf67053a4143234d7dfae664ccd596ce3a061313 not found: ID does not exist" Jan 29 15:48:38 crc kubenswrapper[4620]: I0129 15:48:38.891385 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" path="/var/lib/kubelet/pods/aa8615ea-eb4a-485e-af48-d166ccfb0062/volumes" Jan 29 15:49:04 crc kubenswrapper[4620]: I0129 15:49:04.110970 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:49:04 crc kubenswrapper[4620]: I0129 15:49:04.111348 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:49:34 crc kubenswrapper[4620]: I0129 15:49:34.111582 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:49:34 crc kubenswrapper[4620]: I0129 15:49:34.112106 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:50:04 crc kubenswrapper[4620]: I0129 15:50:04.111006 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:50:04 crc kubenswrapper[4620]: I0129 15:50:04.113120 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:50:04 crc kubenswrapper[4620]: I0129 15:50:04.113303 4620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7469t" Jan 29 15:50:04 crc kubenswrapper[4620]: I0129 15:50:04.114191 4620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699"} pod="openshift-machine-config-operator/machine-config-daemon-7469t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:50:04 crc kubenswrapper[4620]: I0129 15:50:04.114378 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" containerID="cri-o://6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" gracePeriod=600 Jan 29 15:50:04 crc kubenswrapper[4620]: E0129 15:50:04.240338 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:50:04 crc kubenswrapper[4620]: I0129 15:50:04.534673 4620 
generic.go:334] "Generic (PLEG): container finished" podID="a76cce43-3d01-4158-b23a-e21fd5927792" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" exitCode=0 Jan 29 15:50:04 crc kubenswrapper[4620]: I0129 15:50:04.534716 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerDied","Data":"6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699"} Jan 29 15:50:04 crc kubenswrapper[4620]: I0129 15:50:04.534780 4620 scope.go:117] "RemoveContainer" containerID="0363493f5282aa072e0d3aa4dfc81106b236f93cd58f3bf51671a562d1114bd3" Jan 29 15:50:04 crc kubenswrapper[4620]: I0129 15:50:04.535880 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:50:04 crc kubenswrapper[4620]: E0129 15:50:04.536149 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:50:14 crc kubenswrapper[4620]: I0129 15:50:14.873203 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:50:14 crc kubenswrapper[4620]: E0129 15:50:14.874257 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:50:29 crc kubenswrapper[4620]: I0129 15:50:29.242496 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v4wjc/must-gather-hjkxw"] Jan 29 15:50:29 crc kubenswrapper[4620]: E0129 15:50:29.243380 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d1d409-1b69-4687-98d9-801134a8f3f5" containerName="extract-content" Jan 29 15:50:29 crc kubenswrapper[4620]: I0129 15:50:29.243394 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d1d409-1b69-4687-98d9-801134a8f3f5" containerName="extract-content" Jan 29 15:50:29 crc kubenswrapper[4620]: E0129 15:50:29.243411 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" containerName="extract-utilities" Jan 29 15:50:29 crc kubenswrapper[4620]: I0129 15:50:29.243417 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" containerName="extract-utilities" Jan 29 15:50:29 crc kubenswrapper[4620]: E0129 15:50:29.243427 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" containerName="extract-content" Jan 29 15:50:29 crc kubenswrapper[4620]: I0129 15:50:29.243434 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" containerName="extract-content" Jan 29 15:50:29 crc kubenswrapper[4620]: E0129 15:50:29.243442 4620 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" containerName="registry-server" Jan 29 15:50:29 crc kubenswrapper[4620]: I0129 15:50:29.243448 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" containerName="registry-server" Jan 29 15:50:29 crc kubenswrapper[4620]: E0129 15:50:29.243458 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d1d409-1b69-4687-98d9-801134a8f3f5" containerName="registry-server" Jan 29 15:50:29 crc kubenswrapper[4620]: I0129 15:50:29.243463 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d1d409-1b69-4687-98d9-801134a8f3f5" containerName="registry-server" Jan 29 15:50:29 crc kubenswrapper[4620]: E0129 15:50:29.243474 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d1d409-1b69-4687-98d9-801134a8f3f5" containerName="extract-utilities" Jan 29 15:50:29 crc kubenswrapper[4620]: I0129 15:50:29.243480 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d1d409-1b69-4687-98d9-801134a8f3f5" containerName="extract-utilities" Jan 29 15:50:29 crc kubenswrapper[4620]: I0129 15:50:29.243591 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa8615ea-eb4a-485e-af48-d166ccfb0062" containerName="registry-server" Jan 29 15:50:29 crc kubenswrapper[4620]: I0129 15:50:29.243607 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d1d409-1b69-4687-98d9-801134a8f3f5" containerName="registry-server" Jan 29 15:50:29 crc kubenswrapper[4620]: I0129 15:50:29.244333 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v4wjc/must-gather-hjkxw" Jan 29 15:50:29 crc kubenswrapper[4620]: I0129 15:50:29.252887 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v4wjc"/"openshift-service-ca.crt" Jan 29 15:50:29 crc kubenswrapper[4620]: I0129 15:50:29.256501 4620 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v4wjc"/"kube-root-ca.crt" Jan 29 15:50:29 crc kubenswrapper[4620]: I0129 15:50:29.268286 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v4wjc/must-gather-hjkxw"] Jan 29 15:50:29 crc kubenswrapper[4620]: I0129 15:50:29.417244 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcfvj\" (UniqueName: \"kubernetes.io/projected/94b0bb05-6753-44be-ad8b-43c35dfbc043-kube-api-access-pcfvj\") pod \"must-gather-hjkxw\" (UID: \"94b0bb05-6753-44be-ad8b-43c35dfbc043\") " pod="openshift-must-gather-v4wjc/must-gather-hjkxw" Jan 29 15:50:29 crc kubenswrapper[4620]: I0129 15:50:29.417376 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/94b0bb05-6753-44be-ad8b-43c35dfbc043-must-gather-output\") pod \"must-gather-hjkxw\" (UID: \"94b0bb05-6753-44be-ad8b-43c35dfbc043\") " pod="openshift-must-gather-v4wjc/must-gather-hjkxw" Jan 29 15:50:29 crc kubenswrapper[4620]: I0129 15:50:29.519022 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcfvj\" (UniqueName: \"kubernetes.io/projected/94b0bb05-6753-44be-ad8b-43c35dfbc043-kube-api-access-pcfvj\") pod \"must-gather-hjkxw\" (UID: \"94b0bb05-6753-44be-ad8b-43c35dfbc043\") " pod="openshift-must-gather-v4wjc/must-gather-hjkxw" Jan 29 15:50:29 crc kubenswrapper[4620]: I0129 15:50:29.519106 4620 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/94b0bb05-6753-44be-ad8b-43c35dfbc043-must-gather-output\") pod \"must-gather-hjkxw\" (UID: \"94b0bb05-6753-44be-ad8b-43c35dfbc043\") " pod="openshift-must-gather-v4wjc/must-gather-hjkxw" Jan 29 15:50:29 crc kubenswrapper[4620]: I0129 15:50:29.519498 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/94b0bb05-6753-44be-ad8b-43c35dfbc043-must-gather-output\") pod \"must-gather-hjkxw\" (UID: \"94b0bb05-6753-44be-ad8b-43c35dfbc043\") " pod="openshift-must-gather-v4wjc/must-gather-hjkxw" Jan 29 15:50:29 crc kubenswrapper[4620]: I0129 15:50:29.553545 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcfvj\" (UniqueName: \"kubernetes.io/projected/94b0bb05-6753-44be-ad8b-43c35dfbc043-kube-api-access-pcfvj\") pod \"must-gather-hjkxw\" (UID: \"94b0bb05-6753-44be-ad8b-43c35dfbc043\") " pod="openshift-must-gather-v4wjc/must-gather-hjkxw" Jan 29 15:50:29 crc kubenswrapper[4620]: I0129 15:50:29.566554 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v4wjc/must-gather-hjkxw" Jan 29 15:50:29 crc kubenswrapper[4620]: I0129 15:50:29.872392 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:50:29 crc kubenswrapper[4620]: E0129 15:50:29.873136 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:50:30 crc kubenswrapper[4620]: I0129 15:50:30.050191 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v4wjc/must-gather-hjkxw"] Jan 29 15:50:30 crc kubenswrapper[4620]: I0129 15:50:30.733282 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v4wjc/must-gather-hjkxw" event={"ID":"94b0bb05-6753-44be-ad8b-43c35dfbc043","Type":"ContainerStarted","Data":"ba874f9e136eaf164ece01b5ffe282ff3e09f4eb7ba8bc80b067abc7016e9f77"} Jan 29 15:50:39 crc kubenswrapper[4620]: I0129 15:50:39.834938 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v4wjc/must-gather-hjkxw" event={"ID":"94b0bb05-6753-44be-ad8b-43c35dfbc043","Type":"ContainerStarted","Data":"7cbf522964ef852625870e6fdf39d86045d2ab4414bb18aa2e0e5abad42b5474"} Jan 29 15:50:39 crc kubenswrapper[4620]: I0129 15:50:39.835567 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v4wjc/must-gather-hjkxw" event={"ID":"94b0bb05-6753-44be-ad8b-43c35dfbc043","Type":"ContainerStarted","Data":"54ae8e66d3242388cfbf2be7b187e9b4bd5963fb2a8bd203ebde3ef2700c1660"} Jan 29 15:50:39 crc kubenswrapper[4620]: I0129 15:50:39.854748 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v4wjc/must-gather-hjkxw" podStartSLOduration=1.7979991370000001 podStartE2EDuration="10.854729929s" podCreationTimestamp="2026-01-29 15:50:29 +0000 UTC" firstStartedPulling="2026-01-29 15:50:30.045942549 +0000 UTC m=+2970.658770214" lastFinishedPulling="2026-01-29 15:50:39.102673361 +0000 UTC m=+2979.715501006" 
observedRunningTime="2026-01-29 15:50:39.848823974 +0000 UTC m=+2980.461651629" watchObservedRunningTime="2026-01-29 15:50:39.854729929 +0000 UTC m=+2980.467557574" Jan 29 15:50:40 crc kubenswrapper[4620]: I0129 15:50:40.878575 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:50:40 crc kubenswrapper[4620]: E0129 15:50:40.879555 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:50:52 crc kubenswrapper[4620]: I0129 15:50:52.873181 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:50:52 crc kubenswrapper[4620]: E0129 15:50:52.873889 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:51:03 crc kubenswrapper[4620]: I0129 15:51:03.872256 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:51:03 crc kubenswrapper[4620]: E0129 15:51:03.872986 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:51:14 crc kubenswrapper[4620]: I0129 15:51:14.872709 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:51:14 crc kubenswrapper[4620]: E0129 15:51:14.873407 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:51:29 crc kubenswrapper[4620]: I0129 15:51:29.873030 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:51:29 crc kubenswrapper[4620]: E0129 15:51:29.873775 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:51:38 crc 
kubenswrapper[4620]: I0129 15:51:38.251267 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv_c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f/util/0.log" Jan 29 15:51:38 crc kubenswrapper[4620]: I0129 15:51:38.447440 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv_c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f/util/0.log" Jan 29 15:51:38 crc kubenswrapper[4620]: I0129 15:51:38.471145 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv_c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f/pull/0.log" Jan 29 15:51:38 crc kubenswrapper[4620]: I0129 15:51:38.488685 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv_c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f/pull/0.log" Jan 29 15:51:38 crc kubenswrapper[4620]: I0129 15:51:38.653874 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv_c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f/extract/0.log" Jan 29 15:51:38 crc kubenswrapper[4620]: I0129 15:51:38.732390 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv_c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f/util/0.log" Jan 29 15:51:38 crc kubenswrapper[4620]: I0129 15:51:38.796169 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8064d7372ce9772d8867d05cebfa2e133e5fec69aa33277c08e70bc12dbrgfv_c6235846-9e2e-4fd7-8e7d-d4eb2b56cb3f/pull/0.log" Jan 29 15:51:38 crc kubenswrapper[4620]: I0129 15:51:38.952869 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-858d89fd-mjknx_cf77b77d-178f-45cc-854f-5ae0438eac47/manager/0.log" Jan 29 15:51:38 crc kubenswrapper[4620]: I0129 15:51:38.955994 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d6fdb96dc-5r2sp_eba5d75b-67b6-45b6-99a6-508fbbcd6fbc/manager/0.log" Jan 29 15:51:39 crc kubenswrapper[4620]: I0129 15:51:39.077652 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-dd77988f8-vpfmk_66c3f803-5b87-4ac9-9673-55cfa299abda/manager/0.log" Jan 29 15:51:39 crc kubenswrapper[4620]: I0129 15:51:39.135037 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-f8c4db9df-wdbkp_0839a549-c728-40e1-bf59-d8eb6cefc3f2/manager/0.log" Jan 29 15:51:39 crc kubenswrapper[4620]: I0129 15:51:39.248559 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-d8b84fbc-mhkm4_c0a5289e-e06f-456b-9dd9-ab08f6f3e2f7/manager/0.log" Jan 29 15:51:39 crc kubenswrapper[4620]: I0129 15:51:39.324577 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-gl2f9_18e5dfa7-ffa3-4a45-b4bd-9b3e8f5e0f21/manager/0.log" Jan 29 15:51:39 crc kubenswrapper[4620]: I0129 15:51:39.446596 4620 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-zg6ls_6ea8203c-2846-46ef-be3b-49596b6edc45/manager/0.log" Jan 29 15:51:39 crc kubenswrapper[4620]: I0129 15:51:39.546938 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-866c9d5b98-nvhfw_eb53cd1f-b0ca-41b1-a611-b71cc5ba71d5/manager/0.log" Jan 29 15:51:39 crc kubenswrapper[4620]: I0129 15:51:39.724889 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7f9d69db65-fpcjf_379029bd-b764-49a9-b9d0-cdf5d69e2276/manager/0.log" Jan 29 15:51:39 crc kubenswrapper[4620]: I0129 15:51:39.769232 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-76c896469f-zjnc9_04f845cd-06d1-47cf-975c-5ad809f0734a/manager/0.log" Jan 29 15:51:39 crc kubenswrapper[4620]: I0129 15:51:39.901921 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-2ktvb_b5ef0cec-fba0-46b1-8410-cb3fd8551106/manager/0.log" Jan 29 15:51:39 crc kubenswrapper[4620]: I0129 15:51:39.975294 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c7cc6ff45-jx2jt_fcbc9624-faaa-4663-8392-7684b49a3d93/manager/0.log" Jan 29 15:51:40 crc kubenswrapper[4620]: I0129 15:51:40.135844 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-68cb478976-m69zs_521acf13-0266-4f13-9744-0f789f922b31/manager/0.log" Jan 29 15:51:40 crc kubenswrapper[4620]: I0129 15:51:40.165278 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68f8cb846c-lwdxt_1fa9c6ba-6bfe-4c67-b27d-6697d9bc540d/manager/0.log" Jan 29 15:51:40 crc kubenswrapper[4620]: I0129 15:51:40.323523 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dxkf4h_c3484963-7f9a-47b3-b59d-9390033689b6/manager/0.log" Jan 29 15:51:40 crc kubenswrapper[4620]: I0129 15:51:40.556380 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-65db95fbc9-lzmhp_e0487d66-d615-4562-b241-9d1424693de8/manager/0.log" Jan 29 15:51:40 crc kubenswrapper[4620]: I0129 15:51:40.598202 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-66776849dc-cjcst_8c15503a-bec6-4b41-919e-067dd067232b/operator/0.log" Jan 29 15:51:40 crc kubenswrapper[4620]: I0129 15:51:40.761147 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7dssf_7466ed16-1885-44d8-b340-7e07fb51e497/registry-server/0.log" Jan 29 15:51:40 crc kubenswrapper[4620]: I0129 15:51:40.804015 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-rk4qn_ba4a4cf7-578c-4426-b585-c51a610117dc/manager/0.log" Jan 29 15:51:40 crc kubenswrapper[4620]: I0129 15:51:40.997847 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-46gn6_7cf3c71a-0b56-4367-832f-71713e2684c9/operator/0.log" Jan 29 15:51:41 crc kubenswrapper[4620]: I0129 15:51:41.009630 4620 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-7gtnm_209e5c93-4fcb-45d3-8a50-25bfc9f954bd/manager/0.log" Jan 29 15:51:41 crc kubenswrapper[4620]: I0129 15:51:41.241549 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6cf8c44c7-999b7_6c2400dd-889c-46eb-8d6a-d69e1859135d/manager/0.log" Jan 29 15:51:41 crc kubenswrapper[4620]: I0129 15:51:41.247322 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6f7455757b-vx5xt_e50323a3-871d-41a8-b8b7-488d1fd62e6b/manager/0.log" Jan 29 15:51:41 crc kubenswrapper[4620]: I0129 15:51:41.468060 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-59f4c7d7c4-f8vlq_e2802504-ea02-427f-ab78-79e02a882726/manager/0.log" Jan 29 15:51:41 crc kubenswrapper[4620]: I0129 15:51:41.508776 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-6xsvq_e788f64b-fae0-41d0-931c-a707ff7b2221/manager/0.log" Jan 29 15:51:42 crc kubenswrapper[4620]: I0129 15:51:42.873247 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:51:42 crc kubenswrapper[4620]: E0129 15:51:42.873558 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:51:57 crc kubenswrapper[4620]: I0129 15:51:57.873203 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:51:57 crc kubenswrapper[4620]: E0129 15:51:57.874154 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:52:01 crc kubenswrapper[4620]: I0129 15:52:01.846492 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-27rrp_e1ad943a-d557-4727-b4e9-a863aae1a47d/control-plane-machine-set-operator/0.log" Jan 29 15:52:02 crc kubenswrapper[4620]: I0129 15:52:02.051857 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-swx9b_02381b1b-463c-4532-a690-deee86ffc674/kube-rbac-proxy/0.log" Jan 29 15:52:02 crc kubenswrapper[4620]: I0129 15:52:02.153190 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-swx9b_02381b1b-463c-4532-a690-deee86ffc674/machine-api-operator/0.log" Jan 29 15:52:10 crc kubenswrapper[4620]: I0129 15:52:10.878626 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:52:10 crc kubenswrapper[4620]: E0129 15:52:10.879410 4620 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:52:13 crc kubenswrapper[4620]: I0129 15:52:13.445739 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-cvcw9_df87fa1f-48eb-453e-8f45-d9cb15f59c04/cert-manager-controller/0.log" Jan 29 15:52:13 crc kubenswrapper[4620]: I0129 15:52:13.609608 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-c7xrn_14260386-a556-4e0b-9915-ddca9755fd9c/cert-manager-cainjector/0.log" Jan 29 15:52:13 crc kubenswrapper[4620]: I0129 15:52:13.744689 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-cnndp_28179b6f-f84a-4a18-97a0-5a2f6f06423e/cert-manager-webhook/0.log" Jan 29 15:52:23 crc kubenswrapper[4620]: I0129 15:52:23.873103 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:52:23 crc kubenswrapper[4620]: E0129 15:52:23.873833 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:52:25 crc kubenswrapper[4620]: I0129 15:52:25.465824 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-8c7r2_316915e8-4161-4c98-927b-0434cbd2df0b/nmstate-console-plugin/0.log" Jan 29 15:52:25 crc kubenswrapper[4620]: I0129 15:52:25.605462 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-fbf99_f7f18598-e66f-49aa-855e-54ba8e4abced/nmstate-handler/0.log" Jan 29 15:52:25 crc kubenswrapper[4620]: I0129 15:52:25.662023 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-qc5bc_ff1f7291-09f7-47b1-b883-6d473cfd46e2/kube-rbac-proxy/0.log" Jan 29 15:52:25 crc kubenswrapper[4620]: I0129 15:52:25.728079 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-qc5bc_ff1f7291-09f7-47b1-b883-6d473cfd46e2/nmstate-metrics/0.log" Jan 29 15:52:25 crc kubenswrapper[4620]: I0129 15:52:25.889552 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-7gw7j_e43505da-5a95-4f4d-8b44-b67ccca2ef23/nmstate-operator/0.log" Jan 29 15:52:25 crc kubenswrapper[4620]: I0129 15:52:25.926417 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-4nx94_2fe4a43f-5be7-4f39-9698-53a404bc411e/nmstate-webhook/0.log" Jan 29 15:52:37 crc kubenswrapper[4620]: I0129 15:52:37.872946 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:52:37 crc kubenswrapper[4620]: E0129 15:52:37.873634 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:52:49 crc kubenswrapper[4620]: I0129 15:52:49.872856 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:52:49 crc kubenswrapper[4620]: E0129 15:52:49.874242 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:52:50 crc kubenswrapper[4620]: I0129 15:52:50.683499 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-jvtb9_3b3b9527-5117-4a99-9859-dc287cefd3bf/kube-rbac-proxy/0.log" Jan 29 15:52:50 crc kubenswrapper[4620]: I0129 15:52:50.807483 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-jvtb9_3b3b9527-5117-4a99-9859-dc287cefd3bf/controller/0.log" Jan 29 15:52:50 crc kubenswrapper[4620]: I0129 15:52:50.941379 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h6cmz_e6981bff-0689-47b1-96f6-f265340449f9/cp-frr-files/0.log" Jan 29 15:52:51 crc kubenswrapper[4620]: I0129 15:52:51.136140 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h6cmz_e6981bff-0689-47b1-96f6-f265340449f9/cp-frr-files/0.log" Jan 29 15:52:51 crc kubenswrapper[4620]: I0129 15:52:51.173657 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h6cmz_e6981bff-0689-47b1-96f6-f265340449f9/cp-metrics/0.log" Jan 29 15:52:51 crc kubenswrapper[4620]: I0129 15:52:51.178365 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h6cmz_e6981bff-0689-47b1-96f6-f265340449f9/cp-reloader/0.log" Jan 29 15:52:51 crc kubenswrapper[4620]: I0129 15:52:51.236670 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h6cmz_e6981bff-0689-47b1-96f6-f265340449f9/cp-reloader/0.log" Jan 29 15:52:51 crc kubenswrapper[4620]: I0129 15:52:51.366432 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h6cmz_e6981bff-0689-47b1-96f6-f265340449f9/cp-frr-files/0.log" Jan 29 15:52:51 crc kubenswrapper[4620]: I0129 15:52:51.392500 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h6cmz_e6981bff-0689-47b1-96f6-f265340449f9/cp-metrics/0.log" Jan 29 15:52:51 crc kubenswrapper[4620]: I0129 15:52:51.413474 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h6cmz_e6981bff-0689-47b1-96f6-f265340449f9/cp-reloader/0.log" Jan 29 15:52:51 crc kubenswrapper[4620]: I0129 15:52:51.466511 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h6cmz_e6981bff-0689-47b1-96f6-f265340449f9/cp-metrics/0.log" Jan 29 15:52:51 crc kubenswrapper[4620]: I0129 15:52:51.675449 4620 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-h6cmz_e6981bff-0689-47b1-96f6-f265340449f9/cp-reloader/0.log" Jan 29 15:52:51 crc kubenswrapper[4620]: I0129 15:52:51.696428 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h6cmz_e6981bff-0689-47b1-96f6-f265340449f9/cp-metrics/0.log" Jan 29 15:52:51 crc kubenswrapper[4620]: I0129 15:52:51.697268 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h6cmz_e6981bff-0689-47b1-96f6-f265340449f9/cp-frr-files/0.log" Jan 29 15:52:51 crc kubenswrapper[4620]: I0129 15:52:51.739299 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h6cmz_e6981bff-0689-47b1-96f6-f265340449f9/controller/0.log" Jan 29 15:52:51 crc kubenswrapper[4620]: I0129 15:52:51.892115 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h6cmz_e6981bff-0689-47b1-96f6-f265340449f9/kube-rbac-proxy/0.log" Jan 29 15:52:51 crc kubenswrapper[4620]: I0129 15:52:51.933236 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h6cmz_e6981bff-0689-47b1-96f6-f265340449f9/frr-metrics/0.log" Jan 29 15:52:51 crc kubenswrapper[4620]: I0129 15:52:51.957740 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h6cmz_e6981bff-0689-47b1-96f6-f265340449f9/frr/0.log" Jan 29 15:52:52 crc kubenswrapper[4620]: I0129 15:52:52.011719 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h6cmz_e6981bff-0689-47b1-96f6-f265340449f9/kube-rbac-proxy-frr/0.log" Jan 29 15:52:52 crc kubenswrapper[4620]: I0129 15:52:52.165666 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-h6cmz_e6981bff-0689-47b1-96f6-f265340449f9/reloader/0.log" Jan 29 15:52:52 crc kubenswrapper[4620]: I0129 15:52:52.224386 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-klnxt_fd14327c-f27e-4e0b-9955-a9cc76e1c253/frr-k8s-webhook-server/0.log" Jan 29 15:52:52 crc kubenswrapper[4620]: I0129 15:52:52.361290 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-88f9f7c47-mcqwq_ba5af97e-3cd0-4492-8ecf-50a050ab7651/manager/0.log" Jan 29 15:52:52 crc kubenswrapper[4620]: I0129 15:52:52.399543 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-777c554c66-rxv9p_47675cc2-2da6-48dd-8d00-bf9de5b826d9/webhook-server/0.log" Jan 29 15:52:52 crc kubenswrapper[4620]: I0129 15:52:52.533237 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8n22g_ed16c687-3d63-4256-8715-b5654d4760c9/kube-rbac-proxy/0.log" Jan 29 15:52:52 crc kubenswrapper[4620]: I0129 15:52:52.714484 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8n22g_ed16c687-3d63-4256-8715-b5654d4760c9/speaker/0.log" Jan 29 15:53:04 crc kubenswrapper[4620]: I0129 15:53:04.872397 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:53:04 crc kubenswrapper[4620]: E0129 15:53:04.873211 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:53:05 crc kubenswrapper[4620]: I0129 15:53:05.016511 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx_c2b74701-552f-4983-9361-3889af3c6c25/util/0.log" Jan 29 15:53:05 crc kubenswrapper[4620]: I0129 15:53:05.181533 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx_c2b74701-552f-4983-9361-3889af3c6c25/util/0.log" Jan 29 15:53:05 crc kubenswrapper[4620]: I0129 15:53:05.183813 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx_c2b74701-552f-4983-9361-3889af3c6c25/pull/0.log" Jan 29 15:53:05 crc kubenswrapper[4620]: I0129 15:53:05.232059 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx_c2b74701-552f-4983-9361-3889af3c6c25/pull/0.log" Jan 29 15:53:05 crc kubenswrapper[4620]: I0129 15:53:05.444580 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx_c2b74701-552f-4983-9361-3889af3c6c25/util/0.log" Jan 29 15:53:05 crc kubenswrapper[4620]: I0129 15:53:05.450829 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx_c2b74701-552f-4983-9361-3889af3c6c25/extract/0.log" Jan 29 15:53:05 crc kubenswrapper[4620]: I0129 15:53:05.488001 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc62txx_c2b74701-552f-4983-9361-3889af3c6c25/pull/0.log" Jan 29 15:53:05 crc kubenswrapper[4620]: I0129 15:53:05.706899 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh_661ca67d-1c4e-4f80-a931-cb9989be0907/util/0.log" Jan 29 15:53:05 crc kubenswrapper[4620]: I0129 15:53:05.848788 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh_661ca67d-1c4e-4f80-a931-cb9989be0907/pull/0.log" Jan 29 15:53:05 crc kubenswrapper[4620]: I0129 15:53:05.883962 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh_661ca67d-1c4e-4f80-a931-cb9989be0907/pull/0.log" Jan 29 15:53:05 crc kubenswrapper[4620]: I0129 15:53:05.899619 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh_661ca67d-1c4e-4f80-a931-cb9989be0907/util/0.log" Jan 29 15:53:06 crc kubenswrapper[4620]: I0129 15:53:06.100433 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh_661ca67d-1c4e-4f80-a931-cb9989be0907/util/0.log" Jan 29 15:53:06 crc kubenswrapper[4620]: I0129 15:53:06.126657 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh_661ca67d-1c4e-4f80-a931-cb9989be0907/extract/0.log" Jan 29 15:53:06 crc 
kubenswrapper[4620]: I0129 15:53:06.134240 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713tjtbh_661ca67d-1c4e-4f80-a931-cb9989be0907/pull/0.log" Jan 29 15:53:06 crc kubenswrapper[4620]: I0129 15:53:06.285143 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rdbgv_0dcb80a9-ab19-465f-a6e3-0b287574f166/extract-utilities/0.log" Jan 29 15:53:06 crc kubenswrapper[4620]: I0129 15:53:06.503903 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rdbgv_0dcb80a9-ab19-465f-a6e3-0b287574f166/extract-utilities/0.log" Jan 29 15:53:06 crc kubenswrapper[4620]: I0129 15:53:06.529067 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rdbgv_0dcb80a9-ab19-465f-a6e3-0b287574f166/extract-content/0.log" Jan 29 15:53:06 crc kubenswrapper[4620]: I0129 15:53:06.530069 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rdbgv_0dcb80a9-ab19-465f-a6e3-0b287574f166/extract-content/0.log" Jan 29 15:53:06 crc kubenswrapper[4620]: I0129 15:53:06.684452 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rdbgv_0dcb80a9-ab19-465f-a6e3-0b287574f166/extract-utilities/0.log" Jan 29 15:53:06 crc kubenswrapper[4620]: I0129 15:53:06.758643 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rdbgv_0dcb80a9-ab19-465f-a6e3-0b287574f166/extract-content/0.log" Jan 29 15:53:07 crc kubenswrapper[4620]: I0129 15:53:07.049001 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rdbgv_0dcb80a9-ab19-465f-a6e3-0b287574f166/registry-server/0.log" Jan 29 15:53:07 crc kubenswrapper[4620]: I0129 15:53:07.058501 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gjnp4_4b558081-fc79-4ae4-b2e5-b9ea0da28279/extract-utilities/0.log" Jan 29 15:53:07 crc kubenswrapper[4620]: I0129 15:53:07.124117 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gjnp4_4b558081-fc79-4ae4-b2e5-b9ea0da28279/extract-utilities/0.log" Jan 29 15:53:07 crc kubenswrapper[4620]: I0129 15:53:07.188960 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gjnp4_4b558081-fc79-4ae4-b2e5-b9ea0da28279/extract-content/0.log" Jan 29 15:53:07 crc kubenswrapper[4620]: I0129 15:53:07.268890 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gjnp4_4b558081-fc79-4ae4-b2e5-b9ea0da28279/extract-content/0.log" Jan 29 15:53:07 crc kubenswrapper[4620]: I0129 15:53:07.420774 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gjnp4_4b558081-fc79-4ae4-b2e5-b9ea0da28279/extract-utilities/0.log" Jan 29 15:53:07 crc kubenswrapper[4620]: I0129 15:53:07.430343 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gjnp4_4b558081-fc79-4ae4-b2e5-b9ea0da28279/extract-content/0.log" Jan 29 15:53:07 crc kubenswrapper[4620]: I0129 15:53:07.640219 4620 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-f8tnf_c44039e8-f318-4ec2-bd3c-587834aa4bb8/marketplace-operator/3.log" Jan 29 15:53:07 crc kubenswrapper[4620]: I0129 15:53:07.730034 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-f8tnf_c44039e8-f318-4ec2-bd3c-587834aa4bb8/marketplace-operator/2.log" Jan 29 15:53:07 crc kubenswrapper[4620]: I0129 15:53:07.846694 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gjnp4_4b558081-fc79-4ae4-b2e5-b9ea0da28279/registry-server/0.log" Jan 29 15:53:07 crc kubenswrapper[4620]: I0129 15:53:07.948189 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-89hq7_1f6d8d26-432a-439a-b3d5-a122230a094f/extract-utilities/0.log" Jan 29 15:53:08 crc kubenswrapper[4620]: I0129 15:53:08.064127 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-89hq7_1f6d8d26-432a-439a-b3d5-a122230a094f/extract-utilities/0.log" Jan 29 15:53:08 crc kubenswrapper[4620]: I0129 15:53:08.079069 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-89hq7_1f6d8d26-432a-439a-b3d5-a122230a094f/extract-content/0.log" Jan 29 15:53:08 crc kubenswrapper[4620]: I0129 15:53:08.154013 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-89hq7_1f6d8d26-432a-439a-b3d5-a122230a094f/extract-content/0.log" Jan 29 15:53:08 crc kubenswrapper[4620]: I0129 15:53:08.292495 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-89hq7_1f6d8d26-432a-439a-b3d5-a122230a094f/extract-content/0.log" Jan 29 15:53:08 crc kubenswrapper[4620]: I0129 15:53:08.294868 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-89hq7_1f6d8d26-432a-439a-b3d5-a122230a094f/extract-utilities/0.log" Jan 29 15:53:08 crc kubenswrapper[4620]: I0129 15:53:08.392198 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-89hq7_1f6d8d26-432a-439a-b3d5-a122230a094f/registry-server/0.log" Jan 29 15:53:08 crc kubenswrapper[4620]: I0129 15:53:08.557953 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j8dxg_2d0c7733-6f6a-4468-9065-7ca4df3cdc68/extract-utilities/0.log" Jan 29 15:53:08 crc kubenswrapper[4620]: I0129 15:53:08.765692 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j8dxg_2d0c7733-6f6a-4468-9065-7ca4df3cdc68/extract-utilities/0.log" Jan 29 15:53:08 crc kubenswrapper[4620]: I0129 15:53:08.773987 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j8dxg_2d0c7733-6f6a-4468-9065-7ca4df3cdc68/extract-content/0.log" Jan 29 15:53:08 crc kubenswrapper[4620]: I0129 15:53:08.828626 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j8dxg_2d0c7733-6f6a-4468-9065-7ca4df3cdc68/extract-content/0.log" Jan 29 15:53:09 crc kubenswrapper[4620]: I0129 15:53:09.133375 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j8dxg_2d0c7733-6f6a-4468-9065-7ca4df3cdc68/extract-utilities/0.log" Jan 29 15:53:09 crc kubenswrapper[4620]: I0129 15:53:09.178149 4620 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-j8dxg_2d0c7733-6f6a-4468-9065-7ca4df3cdc68/extract-content/0.log" Jan 29 15:53:09 crc kubenswrapper[4620]: I0129 15:53:09.423267 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j8dxg_2d0c7733-6f6a-4468-9065-7ca4df3cdc68/registry-server/0.log" Jan 29 15:53:19 crc kubenswrapper[4620]: I0129 15:53:19.872917 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:53:19 crc kubenswrapper[4620]: E0129 15:53:19.873499 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:53:20 crc kubenswrapper[4620]: I0129 15:53:20.310359 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sjfqm"] Jan 29 15:53:20 crc kubenswrapper[4620]: I0129 15:53:20.312200 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sjfqm" Jan 29 15:53:20 crc kubenswrapper[4620]: I0129 15:53:20.323777 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sjfqm"] Jan 29 15:53:20 crc kubenswrapper[4620]: I0129 15:53:20.482348 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5879f18-6a0c-400f-923d-53908dc85661-catalog-content\") pod \"community-operators-sjfqm\" (UID: \"f5879f18-6a0c-400f-923d-53908dc85661\") " pod="openshift-marketplace/community-operators-sjfqm" Jan 29 15:53:20 crc kubenswrapper[4620]: I0129 15:53:20.482456 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdwwn\" (UniqueName: \"kubernetes.io/projected/f5879f18-6a0c-400f-923d-53908dc85661-kube-api-access-hdwwn\") pod \"community-operators-sjfqm\" (UID: \"f5879f18-6a0c-400f-923d-53908dc85661\") " pod="openshift-marketplace/community-operators-sjfqm" Jan 29 15:53:20 crc kubenswrapper[4620]: I0129 15:53:20.482556 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5879f18-6a0c-400f-923d-53908dc85661-utilities\") pod \"community-operators-sjfqm\" (UID: \"f5879f18-6a0c-400f-923d-53908dc85661\") " pod="openshift-marketplace/community-operators-sjfqm" Jan 29 15:53:20 crc kubenswrapper[4620]: I0129 15:53:20.583474 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5879f18-6a0c-400f-923d-53908dc85661-utilities\") pod \"community-operators-sjfqm\" (UID: \"f5879f18-6a0c-400f-923d-53908dc85661\") " pod="openshift-marketplace/community-operators-sjfqm" Jan 29 15:53:20 crc kubenswrapper[4620]: I0129 15:53:20.583961 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5879f18-6a0c-400f-923d-53908dc85661-utilities\") pod \"community-operators-sjfqm\" (UID: \"f5879f18-6a0c-400f-923d-53908dc85661\") " 
pod="openshift-marketplace/community-operators-sjfqm" Jan 29 15:53:20 crc kubenswrapper[4620]: I0129 15:53:20.584053 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5879f18-6a0c-400f-923d-53908dc85661-catalog-content\") pod \"community-operators-sjfqm\" (UID: \"f5879f18-6a0c-400f-923d-53908dc85661\") " pod="openshift-marketplace/community-operators-sjfqm" Jan 29 15:53:20 crc kubenswrapper[4620]: I0129 15:53:20.584090 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdwwn\" (UniqueName: \"kubernetes.io/projected/f5879f18-6a0c-400f-923d-53908dc85661-kube-api-access-hdwwn\") pod \"community-operators-sjfqm\" (UID: \"f5879f18-6a0c-400f-923d-53908dc85661\") " pod="openshift-marketplace/community-operators-sjfqm" Jan 29 15:53:20 crc kubenswrapper[4620]: I0129 15:53:20.584573 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5879f18-6a0c-400f-923d-53908dc85661-catalog-content\") pod \"community-operators-sjfqm\" (UID: \"f5879f18-6a0c-400f-923d-53908dc85661\") " pod="openshift-marketplace/community-operators-sjfqm" Jan 29 15:53:20 crc kubenswrapper[4620]: I0129 15:53:20.612924 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdwwn\" (UniqueName: \"kubernetes.io/projected/f5879f18-6a0c-400f-923d-53908dc85661-kube-api-access-hdwwn\") pod \"community-operators-sjfqm\" (UID: \"f5879f18-6a0c-400f-923d-53908dc85661\") " pod="openshift-marketplace/community-operators-sjfqm" Jan 29 15:53:20 crc kubenswrapper[4620]: I0129 15:53:20.631284 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sjfqm" Jan 29 15:53:21 crc kubenswrapper[4620]: I0129 15:53:21.321991 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sjfqm"] Jan 29 15:53:21 crc kubenswrapper[4620]: I0129 15:53:21.924186 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjfqm" event={"ID":"f5879f18-6a0c-400f-923d-53908dc85661","Type":"ContainerStarted","Data":"a7804aa874c3c8c16b18661045d0d39ee5cd2e0c2c0d2b6449ccbda58dd233a4"} Jan 29 15:53:22 crc kubenswrapper[4620]: I0129 15:53:22.933254 4620 generic.go:334] "Generic (PLEG): container finished" podID="f5879f18-6a0c-400f-923d-53908dc85661" containerID="6e03d607b5e4fd24c021ae6076c6cb9041b979c00a9b19f2809f438119850f08" exitCode=0 Jan 29 15:53:22 crc kubenswrapper[4620]: I0129 15:53:22.933373 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjfqm" event={"ID":"f5879f18-6a0c-400f-923d-53908dc85661","Type":"ContainerDied","Data":"6e03d607b5e4fd24c021ae6076c6cb9041b979c00a9b19f2809f438119850f08"} Jan 29 15:53:22 crc kubenswrapper[4620]: I0129 15:53:22.935074 4620 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:53:26 crc kubenswrapper[4620]: I0129 15:53:26.960655 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjfqm" event={"ID":"f5879f18-6a0c-400f-923d-53908dc85661","Type":"ContainerStarted","Data":"96b475247760c4724398a93400e9173d74e3d9125bdf7e494ce50d9aef8b245f"} Jan 29 15:53:28 crc kubenswrapper[4620]: I0129 15:53:28.984613 4620 generic.go:334] "Generic (PLEG): container finished" 
podID="f5879f18-6a0c-400f-923d-53908dc85661" containerID="96b475247760c4724398a93400e9173d74e3d9125bdf7e494ce50d9aef8b245f" exitCode=0 Jan 29 15:53:28 crc kubenswrapper[4620]: I0129 15:53:28.984694 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjfqm" event={"ID":"f5879f18-6a0c-400f-923d-53908dc85661","Type":"ContainerDied","Data":"96b475247760c4724398a93400e9173d74e3d9125bdf7e494ce50d9aef8b245f"} Jan 29 15:53:29 crc kubenswrapper[4620]: I0129 15:53:29.994738 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjfqm" event={"ID":"f5879f18-6a0c-400f-923d-53908dc85661","Type":"ContainerStarted","Data":"ca218f704fd60e71969c885e151e791ebde0fda75578418703bef9f707db0af8"} Jan 29 15:53:30 crc kubenswrapper[4620]: I0129 15:53:30.011624 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sjfqm" podStartSLOduration=3.531085278 podStartE2EDuration="10.011605636s" podCreationTimestamp="2026-01-29 15:53:20 +0000 UTC" firstStartedPulling="2026-01-29 15:53:22.934640512 +0000 UTC m=+3143.547468157" lastFinishedPulling="2026-01-29 15:53:29.41516087 +0000 UTC m=+3150.027988515" observedRunningTime="2026-01-29 15:53:30.011017868 +0000 UTC m=+3150.623845513" watchObservedRunningTime="2026-01-29 15:53:30.011605636 +0000 UTC m=+3150.624433291" Jan 29 15:53:30 crc kubenswrapper[4620]: I0129 15:53:30.632786 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sjfqm" Jan 29 15:53:30 crc kubenswrapper[4620]: I0129 15:53:30.633008 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sjfqm" Jan 29 15:53:31 crc kubenswrapper[4620]: I0129 15:53:31.681160 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-sjfqm" podUID="f5879f18-6a0c-400f-923d-53908dc85661" containerName="registry-server" probeResult="failure" output=< Jan 29 15:53:31 crc kubenswrapper[4620]: timeout: failed to connect service ":50051" within 1s Jan 29 15:53:31 crc kubenswrapper[4620]: > Jan 29 15:53:34 crc kubenswrapper[4620]: I0129 15:53:34.872451 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:53:34 crc kubenswrapper[4620]: E0129 15:53:34.872836 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:53:40 crc kubenswrapper[4620]: I0129 15:53:40.682433 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sjfqm" Jan 29 15:53:40 crc kubenswrapper[4620]: I0129 15:53:40.742979 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sjfqm" Jan 29 15:53:40 crc kubenswrapper[4620]: I0129 15:53:40.928074 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sjfqm"] Jan 29 15:53:42 crc kubenswrapper[4620]: I0129 15:53:42.105507 4620 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/community-operators-sjfqm" podUID="f5879f18-6a0c-400f-923d-53908dc85661" containerName="registry-server" containerID="cri-o://ca218f704fd60e71969c885e151e791ebde0fda75578418703bef9f707db0af8" gracePeriod=2 Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.007592 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sjfqm" Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.106919 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5879f18-6a0c-400f-923d-53908dc85661-utilities\") pod \"f5879f18-6a0c-400f-923d-53908dc85661\" (UID: \"f5879f18-6a0c-400f-923d-53908dc85661\") " Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.106980 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdwwn\" (UniqueName: \"kubernetes.io/projected/f5879f18-6a0c-400f-923d-53908dc85661-kube-api-access-hdwwn\") pod \"f5879f18-6a0c-400f-923d-53908dc85661\" (UID: \"f5879f18-6a0c-400f-923d-53908dc85661\") " Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.107145 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5879f18-6a0c-400f-923d-53908dc85661-catalog-content\") pod \"f5879f18-6a0c-400f-923d-53908dc85661\" (UID: \"f5879f18-6a0c-400f-923d-53908dc85661\") " Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.108034 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5879f18-6a0c-400f-923d-53908dc85661-utilities" (OuterVolumeSpecName: "utilities") pod "f5879f18-6a0c-400f-923d-53908dc85661" (UID: "f5879f18-6a0c-400f-923d-53908dc85661"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.114221 4620 generic.go:334] "Generic (PLEG): container finished" podID="f5879f18-6a0c-400f-923d-53908dc85661" containerID="ca218f704fd60e71969c885e151e791ebde0fda75578418703bef9f707db0af8" exitCode=0 Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.114268 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjfqm" event={"ID":"f5879f18-6a0c-400f-923d-53908dc85661","Type":"ContainerDied","Data":"ca218f704fd60e71969c885e151e791ebde0fda75578418703bef9f707db0af8"} Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.114305 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjfqm" event={"ID":"f5879f18-6a0c-400f-923d-53908dc85661","Type":"ContainerDied","Data":"a7804aa874c3c8c16b18661045d0d39ee5cd2e0c2c0d2b6449ccbda58dd233a4"} Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.114326 4620 scope.go:117] "RemoveContainer" containerID="ca218f704fd60e71969c885e151e791ebde0fda75578418703bef9f707db0af8" Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.114389 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sjfqm" Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.117731 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5879f18-6a0c-400f-923d-53908dc85661-kube-api-access-hdwwn" (OuterVolumeSpecName: "kube-api-access-hdwwn") pod "f5879f18-6a0c-400f-923d-53908dc85661" (UID: "f5879f18-6a0c-400f-923d-53908dc85661"). InnerVolumeSpecName "kube-api-access-hdwwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.156085 4620 scope.go:117] "RemoveContainer" containerID="96b475247760c4724398a93400e9173d74e3d9125bdf7e494ce50d9aef8b245f" Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.169154 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5879f18-6a0c-400f-923d-53908dc85661-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5879f18-6a0c-400f-923d-53908dc85661" (UID: "f5879f18-6a0c-400f-923d-53908dc85661"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.173559 4620 scope.go:117] "RemoveContainer" containerID="6e03d607b5e4fd24c021ae6076c6cb9041b979c00a9b19f2809f438119850f08" Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.195730 4620 scope.go:117] "RemoveContainer" containerID="ca218f704fd60e71969c885e151e791ebde0fda75578418703bef9f707db0af8" Jan 29 15:53:43 crc kubenswrapper[4620]: E0129 15:53:43.196117 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca218f704fd60e71969c885e151e791ebde0fda75578418703bef9f707db0af8\": container with ID starting with ca218f704fd60e71969c885e151e791ebde0fda75578418703bef9f707db0af8 not found: ID does not exist" containerID="ca218f704fd60e71969c885e151e791ebde0fda75578418703bef9f707db0af8" Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.196148 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca218f704fd60e71969c885e151e791ebde0fda75578418703bef9f707db0af8"} err="failed to get container status \"ca218f704fd60e71969c885e151e791ebde0fda75578418703bef9f707db0af8\": rpc error: code = NotFound desc = could not find container \"ca218f704fd60e71969c885e151e791ebde0fda75578418703bef9f707db0af8\": container with ID starting with ca218f704fd60e71969c885e151e791ebde0fda75578418703bef9f707db0af8 not found: ID does not exist" Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.196170 4620 scope.go:117] "RemoveContainer" containerID="96b475247760c4724398a93400e9173d74e3d9125bdf7e494ce50d9aef8b245f" Jan 29 15:53:43 crc kubenswrapper[4620]: E0129 15:53:43.196366 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96b475247760c4724398a93400e9173d74e3d9125bdf7e494ce50d9aef8b245f\": container with ID starting with 96b475247760c4724398a93400e9173d74e3d9125bdf7e494ce50d9aef8b245f not found: ID does not exist" containerID="96b475247760c4724398a93400e9173d74e3d9125bdf7e494ce50d9aef8b245f" Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.196388 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96b475247760c4724398a93400e9173d74e3d9125bdf7e494ce50d9aef8b245f"} err="failed to get container status 
\"96b475247760c4724398a93400e9173d74e3d9125bdf7e494ce50d9aef8b245f\": rpc error: code = NotFound desc = could not find container \"96b475247760c4724398a93400e9173d74e3d9125bdf7e494ce50d9aef8b245f\": container with ID starting with 96b475247760c4724398a93400e9173d74e3d9125bdf7e494ce50d9aef8b245f not found: ID does not exist" Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.196402 4620 scope.go:117] "RemoveContainer" containerID="6e03d607b5e4fd24c021ae6076c6cb9041b979c00a9b19f2809f438119850f08" Jan 29 15:53:43 crc kubenswrapper[4620]: E0129 15:53:43.196917 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e03d607b5e4fd24c021ae6076c6cb9041b979c00a9b19f2809f438119850f08\": container with ID starting with 6e03d607b5e4fd24c021ae6076c6cb9041b979c00a9b19f2809f438119850f08 not found: ID does not exist" containerID="6e03d607b5e4fd24c021ae6076c6cb9041b979c00a9b19f2809f438119850f08" Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.196946 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e03d607b5e4fd24c021ae6076c6cb9041b979c00a9b19f2809f438119850f08"} err="failed to get container status \"6e03d607b5e4fd24c021ae6076c6cb9041b979c00a9b19f2809f438119850f08\": rpc error: code = NotFound desc = could not find container \"6e03d607b5e4fd24c021ae6076c6cb9041b979c00a9b19f2809f438119850f08\": container with ID starting with 6e03d607b5e4fd24c021ae6076c6cb9041b979c00a9b19f2809f438119850f08 not found: ID does not exist" Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.208893 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5879f18-6a0c-400f-923d-53908dc85661-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.208933 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5879f18-6a0c-400f-923d-53908dc85661-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.208942 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdwwn\" (UniqueName: \"kubernetes.io/projected/f5879f18-6a0c-400f-923d-53908dc85661-kube-api-access-hdwwn\") on node \"crc\" DevicePath \"\"" Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.452797 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sjfqm"] Jan 29 15:53:43 crc kubenswrapper[4620]: I0129 15:53:43.458589 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sjfqm"] Jan 29 15:53:44 crc kubenswrapper[4620]: I0129 15:53:44.883388 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5879f18-6a0c-400f-923d-53908dc85661" path="/var/lib/kubelet/pods/f5879f18-6a0c-400f-923d-53908dc85661/volumes" Jan 29 15:53:45 crc kubenswrapper[4620]: I0129 15:53:45.873383 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:53:45 crc kubenswrapper[4620]: E0129 15:53:45.874158 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:53:58 crc kubenswrapper[4620]: I0129 15:53:58.878615 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:53:58 crc kubenswrapper[4620]: E0129 15:53:58.879438 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:54:10 crc kubenswrapper[4620]: I0129 15:54:10.879399 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:54:10 crc kubenswrapper[4620]: E0129 15:54:10.880414 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:54:18 crc kubenswrapper[4620]: I0129 15:54:18.385983 4620 generic.go:334] "Generic (PLEG): container finished" podID="94b0bb05-6753-44be-ad8b-43c35dfbc043" containerID="54ae8e66d3242388cfbf2be7b187e9b4bd5963fb2a8bd203ebde3ef2700c1660" exitCode=0 Jan 29 15:54:18 crc kubenswrapper[4620]: I0129 15:54:18.386071 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v4wjc/must-gather-hjkxw" event={"ID":"94b0bb05-6753-44be-ad8b-43c35dfbc043","Type":"ContainerDied","Data":"54ae8e66d3242388cfbf2be7b187e9b4bd5963fb2a8bd203ebde3ef2700c1660"} Jan 29 15:54:18 crc kubenswrapper[4620]: I0129 15:54:18.387339 4620 scope.go:117] "RemoveContainer" containerID="54ae8e66d3242388cfbf2be7b187e9b4bd5963fb2a8bd203ebde3ef2700c1660" Jan 29 15:54:19 crc kubenswrapper[4620]: I0129 15:54:19.005557 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v4wjc_must-gather-hjkxw_94b0bb05-6753-44be-ad8b-43c35dfbc043/gather/0.log" Jan 29 15:54:22 crc kubenswrapper[4620]: I0129 15:54:22.873527 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:54:22 crc kubenswrapper[4620]: E0129 15:54:22.874259 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:54:26 crc kubenswrapper[4620]: I0129 15:54:26.316707 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v4wjc/must-gather-hjkxw"] Jan 29 15:54:26 crc kubenswrapper[4620]: I0129 15:54:26.317251 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-v4wjc/must-gather-hjkxw" podUID="94b0bb05-6753-44be-ad8b-43c35dfbc043" containerName="copy" 
containerID="cri-o://7cbf522964ef852625870e6fdf39d86045d2ab4414bb18aa2e0e5abad42b5474" gracePeriod=2 Jan 29 15:54:26 crc kubenswrapper[4620]: I0129 15:54:26.333705 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v4wjc/must-gather-hjkxw"] Jan 29 15:54:26 crc kubenswrapper[4620]: I0129 15:54:26.453439 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v4wjc_must-gather-hjkxw_94b0bb05-6753-44be-ad8b-43c35dfbc043/copy/0.log" Jan 29 15:54:26 crc kubenswrapper[4620]: I0129 15:54:26.454608 4620 generic.go:334] "Generic (PLEG): container finished" podID="94b0bb05-6753-44be-ad8b-43c35dfbc043" containerID="7cbf522964ef852625870e6fdf39d86045d2ab4414bb18aa2e0e5abad42b5474" exitCode=143 Jan 29 15:54:26 crc kubenswrapper[4620]: I0129 15:54:26.701396 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v4wjc_must-gather-hjkxw_94b0bb05-6753-44be-ad8b-43c35dfbc043/copy/0.log" Jan 29 15:54:26 crc kubenswrapper[4620]: I0129 15:54:26.701884 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v4wjc/must-gather-hjkxw" Jan 29 15:54:26 crc kubenswrapper[4620]: I0129 15:54:26.866786 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcfvj\" (UniqueName: \"kubernetes.io/projected/94b0bb05-6753-44be-ad8b-43c35dfbc043-kube-api-access-pcfvj\") pod \"94b0bb05-6753-44be-ad8b-43c35dfbc043\" (UID: \"94b0bb05-6753-44be-ad8b-43c35dfbc043\") " Jan 29 15:54:26 crc kubenswrapper[4620]: I0129 15:54:26.866952 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/94b0bb05-6753-44be-ad8b-43c35dfbc043-must-gather-output\") pod \"94b0bb05-6753-44be-ad8b-43c35dfbc043\" (UID: \"94b0bb05-6753-44be-ad8b-43c35dfbc043\") " Jan 29 15:54:26 crc kubenswrapper[4620]: I0129 15:54:26.898566 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94b0bb05-6753-44be-ad8b-43c35dfbc043-kube-api-access-pcfvj" (OuterVolumeSpecName: "kube-api-access-pcfvj") pod "94b0bb05-6753-44be-ad8b-43c35dfbc043" (UID: "94b0bb05-6753-44be-ad8b-43c35dfbc043"). InnerVolumeSpecName "kube-api-access-pcfvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:54:26 crc kubenswrapper[4620]: I0129 15:54:26.958527 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94b0bb05-6753-44be-ad8b-43c35dfbc043-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "94b0bb05-6753-44be-ad8b-43c35dfbc043" (UID: "94b0bb05-6753-44be-ad8b-43c35dfbc043"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:54:26 crc kubenswrapper[4620]: I0129 15:54:26.968347 4620 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/94b0bb05-6753-44be-ad8b-43c35dfbc043-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:26 crc kubenswrapper[4620]: I0129 15:54:26.968391 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcfvj\" (UniqueName: \"kubernetes.io/projected/94b0bb05-6753-44be-ad8b-43c35dfbc043-kube-api-access-pcfvj\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:27 crc kubenswrapper[4620]: I0129 15:54:27.471345 4620 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v4wjc_must-gather-hjkxw_94b0bb05-6753-44be-ad8b-43c35dfbc043/copy/0.log" Jan 29 15:54:27 crc kubenswrapper[4620]: I0129 15:54:27.472895 4620 scope.go:117] "RemoveContainer" containerID="7cbf522964ef852625870e6fdf39d86045d2ab4414bb18aa2e0e5abad42b5474" Jan 29 15:54:27 crc kubenswrapper[4620]: I0129 15:54:27.472971 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v4wjc/must-gather-hjkxw" Jan 29 15:54:27 crc kubenswrapper[4620]: I0129 15:54:27.497206 4620 scope.go:117] "RemoveContainer" containerID="54ae8e66d3242388cfbf2be7b187e9b4bd5963fb2a8bd203ebde3ef2700c1660" Jan 29 15:54:28 crc kubenswrapper[4620]: I0129 15:54:28.881121 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94b0bb05-6753-44be-ad8b-43c35dfbc043" path="/var/lib/kubelet/pods/94b0bb05-6753-44be-ad8b-43c35dfbc043/volumes" Jan 29 15:54:33 crc kubenswrapper[4620]: I0129 15:54:33.872442 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:54:33 crc kubenswrapper[4620]: E0129 15:54:33.872794 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:54:46 crc kubenswrapper[4620]: I0129 15:54:46.873332 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:54:46 crc kubenswrapper[4620]: E0129 15:54:46.874668 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:55:01 crc kubenswrapper[4620]: I0129 15:55:01.873406 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:55:01 crc kubenswrapper[4620]: E0129 15:55:01.874143 4620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7469t_openshift-machine-config-operator(a76cce43-3d01-4158-b23a-e21fd5927792)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" Jan 29 15:55:12 crc kubenswrapper[4620]: I0129 15:55:12.872447 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699" Jan 29 15:55:13 crc kubenswrapper[4620]: I0129 15:55:13.805268 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerStarted","Data":"4e4052546046dd4c8fe27bd42898c0eb6be7f95c18a5457f2c808d59d35bcae2"} Jan 29 15:55:44 crc kubenswrapper[4620]: I0129 15:55:44.013182 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4x6pp"] Jan 29 15:55:44 crc kubenswrapper[4620]: E0129 15:55:44.013858 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94b0bb05-6753-44be-ad8b-43c35dfbc043" containerName="copy" Jan 29 15:55:44 crc kubenswrapper[4620]: I0129 15:55:44.013869 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="94b0bb05-6753-44be-ad8b-43c35dfbc043" containerName="copy" Jan 29 15:55:44 crc kubenswrapper[4620]: E0129 15:55:44.013883 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5879f18-6a0c-400f-923d-53908dc85661" containerName="extract-utilities" Jan 29 15:55:44 crc kubenswrapper[4620]: I0129 15:55:44.013889 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5879f18-6a0c-400f-923d-53908dc85661" containerName="extract-utilities" Jan 29 15:55:44 crc kubenswrapper[4620]: E0129 15:55:44.013899 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5879f18-6a0c-400f-923d-53908dc85661" containerName="registry-server" Jan 29 15:55:44 crc kubenswrapper[4620]: I0129 15:55:44.013906 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5879f18-6a0c-400f-923d-53908dc85661" containerName="registry-server" Jan 29 15:55:44 crc kubenswrapper[4620]: E0129 15:55:44.013917 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94b0bb05-6753-44be-ad8b-43c35dfbc043" containerName="gather" Jan 29 15:55:44 crc kubenswrapper[4620]: I0129 15:55:44.013922 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="94b0bb05-6753-44be-ad8b-43c35dfbc043" containerName="gather" Jan 29 15:55:44 crc kubenswrapper[4620]: E0129 15:55:44.013939 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5879f18-6a0c-400f-923d-53908dc85661" containerName="extract-content" Jan 29 15:55:44 crc kubenswrapper[4620]: I0129 15:55:44.013945 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5879f18-6a0c-400f-923d-53908dc85661" containerName="extract-content" Jan 29 15:55:44 crc kubenswrapper[4620]: I0129 15:55:44.014065 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="94b0bb05-6753-44be-ad8b-43c35dfbc043" containerName="gather" Jan 29 15:55:44 crc kubenswrapper[4620]: I0129 15:55:44.014077 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="94b0bb05-6753-44be-ad8b-43c35dfbc043" containerName="copy" Jan 29 15:55:44 crc kubenswrapper[4620]: I0129 15:55:44.014087 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5879f18-6a0c-400f-923d-53908dc85661" containerName="registry-server" Jan 29 15:55:44 crc kubenswrapper[4620]: I0129 15:55:44.015043 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4x6pp" Jan 29 15:55:44 crc kubenswrapper[4620]: I0129 15:55:44.033022 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4x6pp"] Jan 29 15:55:44 crc kubenswrapper[4620]: I0129 15:55:44.156304 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0efc705-fe12-4ebd-89c3-2f3d7b47f62d-catalog-content\") pod \"certified-operators-4x6pp\" (UID: \"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d\") " pod="openshift-marketplace/certified-operators-4x6pp" Jan 29 15:55:44 crc kubenswrapper[4620]: I0129 15:55:44.156593 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0efc705-fe12-4ebd-89c3-2f3d7b47f62d-utilities\") pod \"certified-operators-4x6pp\" (UID: \"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d\") " pod="openshift-marketplace/certified-operators-4x6pp" Jan 29 15:55:44 crc kubenswrapper[4620]: I0129 15:55:44.156625 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75vfg\" (UniqueName: \"kubernetes.io/projected/a0efc705-fe12-4ebd-89c3-2f3d7b47f62d-kube-api-access-75vfg\") pod \"certified-operators-4x6pp\" (UID: \"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d\") " pod="openshift-marketplace/certified-operators-4x6pp" Jan 29 15:55:44 crc kubenswrapper[4620]: I0129 15:55:44.258249 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0efc705-fe12-4ebd-89c3-2f3d7b47f62d-catalog-content\") pod \"certified-operators-4x6pp\" (UID: \"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d\") " pod="openshift-marketplace/certified-operators-4x6pp" Jan 29 15:55:44 crc kubenswrapper[4620]: I0129 15:55:44.258370 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0efc705-fe12-4ebd-89c3-2f3d7b47f62d-utilities\") pod \"certified-operators-4x6pp\" (UID: \"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d\") " pod="openshift-marketplace/certified-operators-4x6pp" Jan 29 15:55:44 crc kubenswrapper[4620]: I0129 15:55:44.258394 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75vfg\" (UniqueName: \"kubernetes.io/projected/a0efc705-fe12-4ebd-89c3-2f3d7b47f62d-kube-api-access-75vfg\") pod \"certified-operators-4x6pp\" (UID: \"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d\") " pod="openshift-marketplace/certified-operators-4x6pp" Jan 29 15:55:44 crc kubenswrapper[4620]: I0129 15:55:44.258988 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0efc705-fe12-4ebd-89c3-2f3d7b47f62d-catalog-content\") pod \"certified-operators-4x6pp\" (UID: \"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d\") " pod="openshift-marketplace/certified-operators-4x6pp" Jan 29 15:55:44 crc kubenswrapper[4620]: I0129 15:55:44.259027 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0efc705-fe12-4ebd-89c3-2f3d7b47f62d-utilities\") pod \"certified-operators-4x6pp\" (UID: \"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d\") " pod="openshift-marketplace/certified-operators-4x6pp" Jan 29 15:55:44 crc kubenswrapper[4620]: I0129 15:55:44.284622 4620 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-75vfg\" (UniqueName: \"kubernetes.io/projected/a0efc705-fe12-4ebd-89c3-2f3d7b47f62d-kube-api-access-75vfg\") pod \"certified-operators-4x6pp\" (UID: \"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d\") " pod="openshift-marketplace/certified-operators-4x6pp" Jan 29 15:55:44 crc kubenswrapper[4620]: I0129 15:55:44.337948 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4x6pp" Jan 29 15:55:44 crc kubenswrapper[4620]: I0129 15:55:44.858855 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4x6pp"] Jan 29 15:55:45 crc kubenswrapper[4620]: I0129 15:55:45.038794 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4x6pp" event={"ID":"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d","Type":"ContainerStarted","Data":"610e1a4053ee47e63e1a456a6c295b2967458f1ee77b435dc182f02d66e79ac2"} Jan 29 15:55:45 crc kubenswrapper[4620]: I0129 15:55:45.038852 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4x6pp" event={"ID":"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d","Type":"ContainerStarted","Data":"7e4328817023af8d917ca36f38ba0dcc2ca1bc89ecdf80ec9c79703c67801185"} Jan 29 15:55:46 crc kubenswrapper[4620]: I0129 15:55:46.046281 4620 generic.go:334] "Generic (PLEG): container finished" podID="a0efc705-fe12-4ebd-89c3-2f3d7b47f62d" containerID="610e1a4053ee47e63e1a456a6c295b2967458f1ee77b435dc182f02d66e79ac2" exitCode=0 Jan 29 15:55:46 crc kubenswrapper[4620]: I0129 15:55:46.046342 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4x6pp" event={"ID":"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d","Type":"ContainerDied","Data":"610e1a4053ee47e63e1a456a6c295b2967458f1ee77b435dc182f02d66e79ac2"} Jan 29 15:55:47 crc kubenswrapper[4620]: I0129 15:55:47.059012 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4x6pp" event={"ID":"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d","Type":"ContainerStarted","Data":"d1f0bd48cac2dcaf73b18550ad749e8b382b0067e248a9964d9c08d2c5558ea2"} Jan 29 15:55:48 crc kubenswrapper[4620]: I0129 15:55:48.068124 4620 generic.go:334] "Generic (PLEG): container finished" podID="a0efc705-fe12-4ebd-89c3-2f3d7b47f62d" containerID="d1f0bd48cac2dcaf73b18550ad749e8b382b0067e248a9964d9c08d2c5558ea2" exitCode=0 Jan 29 15:55:48 crc kubenswrapper[4620]: I0129 15:55:48.068174 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4x6pp" event={"ID":"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d","Type":"ContainerDied","Data":"d1f0bd48cac2dcaf73b18550ad749e8b382b0067e248a9964d9c08d2c5558ea2"} Jan 29 15:55:49 crc kubenswrapper[4620]: I0129 15:55:49.076084 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4x6pp" event={"ID":"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d","Type":"ContainerStarted","Data":"651fd09f026d564275e95f2790b56cb5545ad8e8d03f8a1b2e5efab8d1b3c3fc"} Jan 29 15:55:54 crc kubenswrapper[4620]: I0129 15:55:54.338731 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4x6pp" Jan 29 15:55:54 crc kubenswrapper[4620]: I0129 15:55:54.339362 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4x6pp" Jan 29 15:55:54 crc kubenswrapper[4620]: I0129 
15:55:54.390658 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4x6pp" Jan 29 15:55:54 crc kubenswrapper[4620]: I0129 15:55:54.417065 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4x6pp" podStartSLOduration=8.984389151 podStartE2EDuration="11.417039008s" podCreationTimestamp="2026-01-29 15:55:43 +0000 UTC" firstStartedPulling="2026-01-29 15:55:46.048523438 +0000 UTC m=+3286.661351083" lastFinishedPulling="2026-01-29 15:55:48.481173285 +0000 UTC m=+3289.094000940" observedRunningTime="2026-01-29 15:55:49.096993768 +0000 UTC m=+3289.709821423" watchObservedRunningTime="2026-01-29 15:55:54.417039008 +0000 UTC m=+3295.029866673" Jan 29 15:55:55 crc kubenswrapper[4620]: I0129 15:55:55.176507 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4x6pp" Jan 29 15:55:55 crc kubenswrapper[4620]: I0129 15:55:55.240386 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4x6pp"] Jan 29 15:55:57 crc kubenswrapper[4620]: I0129 15:55:57.125457 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4x6pp" podUID="a0efc705-fe12-4ebd-89c3-2f3d7b47f62d" containerName="registry-server" containerID="cri-o://651fd09f026d564275e95f2790b56cb5545ad8e8d03f8a1b2e5efab8d1b3c3fc" gracePeriod=2 Jan 29 15:55:57 crc kubenswrapper[4620]: I0129 15:55:57.618212 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4x6pp" Jan 29 15:55:57 crc kubenswrapper[4620]: I0129 15:55:57.779536 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75vfg\" (UniqueName: \"kubernetes.io/projected/a0efc705-fe12-4ebd-89c3-2f3d7b47f62d-kube-api-access-75vfg\") pod \"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d\" (UID: \"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d\") " Jan 29 15:55:57 crc kubenswrapper[4620]: I0129 15:55:57.779748 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0efc705-fe12-4ebd-89c3-2f3d7b47f62d-utilities\") pod \"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d\" (UID: \"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d\") " Jan 29 15:55:57 crc kubenswrapper[4620]: I0129 15:55:57.779897 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0efc705-fe12-4ebd-89c3-2f3d7b47f62d-catalog-content\") pod \"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d\" (UID: \"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d\") " Jan 29 15:55:57 crc kubenswrapper[4620]: I0129 15:55:57.781153 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0efc705-fe12-4ebd-89c3-2f3d7b47f62d-utilities" (OuterVolumeSpecName: "utilities") pod "a0efc705-fe12-4ebd-89c3-2f3d7b47f62d" (UID: "a0efc705-fe12-4ebd-89c3-2f3d7b47f62d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:55:57 crc kubenswrapper[4620]: I0129 15:55:57.788903 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0efc705-fe12-4ebd-89c3-2f3d7b47f62d-kube-api-access-75vfg" (OuterVolumeSpecName: "kube-api-access-75vfg") pod "a0efc705-fe12-4ebd-89c3-2f3d7b47f62d" (UID: "a0efc705-fe12-4ebd-89c3-2f3d7b47f62d"). InnerVolumeSpecName "kube-api-access-75vfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:55:57 crc kubenswrapper[4620]: I0129 15:55:57.881606 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0efc705-fe12-4ebd-89c3-2f3d7b47f62d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:57 crc kubenswrapper[4620]: I0129 15:55:57.881894 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75vfg\" (UniqueName: \"kubernetes.io/projected/a0efc705-fe12-4ebd-89c3-2f3d7b47f62d-kube-api-access-75vfg\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:58 crc kubenswrapper[4620]: I0129 15:55:58.135458 4620 generic.go:334] "Generic (PLEG): container finished" podID="a0efc705-fe12-4ebd-89c3-2f3d7b47f62d" containerID="651fd09f026d564275e95f2790b56cb5545ad8e8d03f8a1b2e5efab8d1b3c3fc" exitCode=0 Jan 29 15:55:58 crc kubenswrapper[4620]: I0129 15:55:58.135504 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4x6pp" event={"ID":"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d","Type":"ContainerDied","Data":"651fd09f026d564275e95f2790b56cb5545ad8e8d03f8a1b2e5efab8d1b3c3fc"} Jan 29 15:55:58 crc kubenswrapper[4620]: I0129 15:55:58.135531 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4x6pp" event={"ID":"a0efc705-fe12-4ebd-89c3-2f3d7b47f62d","Type":"ContainerDied","Data":"7e4328817023af8d917ca36f38ba0dcc2ca1bc89ecdf80ec9c79703c67801185"} Jan 29 15:55:58 crc kubenswrapper[4620]: I0129 15:55:58.135552 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4x6pp" Jan 29 15:55:58 crc kubenswrapper[4620]: I0129 15:55:58.135557 4620 scope.go:117] "RemoveContainer" containerID="651fd09f026d564275e95f2790b56cb5545ad8e8d03f8a1b2e5efab8d1b3c3fc" Jan 29 15:55:58 crc kubenswrapper[4620]: I0129 15:55:58.153129 4620 scope.go:117] "RemoveContainer" containerID="d1f0bd48cac2dcaf73b18550ad749e8b382b0067e248a9964d9c08d2c5558ea2" Jan 29 15:55:58 crc kubenswrapper[4620]: I0129 15:55:58.183275 4620 scope.go:117] "RemoveContainer" containerID="610e1a4053ee47e63e1a456a6c295b2967458f1ee77b435dc182f02d66e79ac2" Jan 29 15:55:58 crc kubenswrapper[4620]: I0129 15:55:58.212340 4620 scope.go:117] "RemoveContainer" containerID="651fd09f026d564275e95f2790b56cb5545ad8e8d03f8a1b2e5efab8d1b3c3fc" Jan 29 15:55:58 crc kubenswrapper[4620]: E0129 15:55:58.212966 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"651fd09f026d564275e95f2790b56cb5545ad8e8d03f8a1b2e5efab8d1b3c3fc\": container with ID starting with 651fd09f026d564275e95f2790b56cb5545ad8e8d03f8a1b2e5efab8d1b3c3fc not found: ID does not exist" containerID="651fd09f026d564275e95f2790b56cb5545ad8e8d03f8a1b2e5efab8d1b3c3fc" Jan 29 15:55:58 crc kubenswrapper[4620]: I0129 15:55:58.213023 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"651fd09f026d564275e95f2790b56cb5545ad8e8d03f8a1b2e5efab8d1b3c3fc"} err="failed to get container status \"651fd09f026d564275e95f2790b56cb5545ad8e8d03f8a1b2e5efab8d1b3c3fc\": rpc error: code = NotFound desc = could not find container \"651fd09f026d564275e95f2790b56cb5545ad8e8d03f8a1b2e5efab8d1b3c3fc\": container with ID starting with 651fd09f026d564275e95f2790b56cb5545ad8e8d03f8a1b2e5efab8d1b3c3fc not found: ID does not exist" Jan 29 15:55:58 crc kubenswrapper[4620]: I0129 15:55:58.213059 4620 scope.go:117] "RemoveContainer" containerID="d1f0bd48cac2dcaf73b18550ad749e8b382b0067e248a9964d9c08d2c5558ea2" Jan 29 15:55:58 crc kubenswrapper[4620]: E0129 15:55:58.213598 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1f0bd48cac2dcaf73b18550ad749e8b382b0067e248a9964d9c08d2c5558ea2\": container with ID starting with d1f0bd48cac2dcaf73b18550ad749e8b382b0067e248a9964d9c08d2c5558ea2 not found: ID does not exist" containerID="d1f0bd48cac2dcaf73b18550ad749e8b382b0067e248a9964d9c08d2c5558ea2" Jan 29 15:55:58 crc kubenswrapper[4620]: I0129 15:55:58.213624 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1f0bd48cac2dcaf73b18550ad749e8b382b0067e248a9964d9c08d2c5558ea2"} err="failed to get container status \"d1f0bd48cac2dcaf73b18550ad749e8b382b0067e248a9964d9c08d2c5558ea2\": rpc error: code = NotFound desc = could not find container \"d1f0bd48cac2dcaf73b18550ad749e8b382b0067e248a9964d9c08d2c5558ea2\": container with ID starting with d1f0bd48cac2dcaf73b18550ad749e8b382b0067e248a9964d9c08d2c5558ea2 not found: ID does not exist" Jan 29 15:55:58 crc kubenswrapper[4620]: I0129 15:55:58.213644 4620 scope.go:117] "RemoveContainer" containerID="610e1a4053ee47e63e1a456a6c295b2967458f1ee77b435dc182f02d66e79ac2" Jan 29 15:55:58 crc kubenswrapper[4620]: E0129 15:55:58.214172 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"610e1a4053ee47e63e1a456a6c295b2967458f1ee77b435dc182f02d66e79ac2\": container with ID starting 
with 610e1a4053ee47e63e1a456a6c295b2967458f1ee77b435dc182f02d66e79ac2 not found: ID does not exist" containerID="610e1a4053ee47e63e1a456a6c295b2967458f1ee77b435dc182f02d66e79ac2" Jan 29 15:55:58 crc kubenswrapper[4620]: I0129 15:55:58.214210 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"610e1a4053ee47e63e1a456a6c295b2967458f1ee77b435dc182f02d66e79ac2"} err="failed to get container status \"610e1a4053ee47e63e1a456a6c295b2967458f1ee77b435dc182f02d66e79ac2\": rpc error: code = NotFound desc = could not find container \"610e1a4053ee47e63e1a456a6c295b2967458f1ee77b435dc182f02d66e79ac2\": container with ID starting with 610e1a4053ee47e63e1a456a6c295b2967458f1ee77b435dc182f02d66e79ac2 not found: ID does not exist" Jan 29 15:55:58 crc kubenswrapper[4620]: I0129 15:55:58.495436 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0efc705-fe12-4ebd-89c3-2f3d7b47f62d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0efc705-fe12-4ebd-89c3-2f3d7b47f62d" (UID: "a0efc705-fe12-4ebd-89c3-2f3d7b47f62d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:55:58 crc kubenswrapper[4620]: I0129 15:55:58.593233 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0efc705-fe12-4ebd-89c3-2f3d7b47f62d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:58 crc kubenswrapper[4620]: I0129 15:55:58.764391 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4x6pp"] Jan 29 15:55:58 crc kubenswrapper[4620]: I0129 15:55:58.774872 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4x6pp"] Jan 29 15:55:58 crc kubenswrapper[4620]: I0129 15:55:58.883022 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0efc705-fe12-4ebd-89c3-2f3d7b47f62d" path="/var/lib/kubelet/pods/a0efc705-fe12-4ebd-89c3-2f3d7b47f62d/volumes" Jan 29 15:56:08 crc kubenswrapper[4620]: I0129 15:56:08.254680 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nl488"] Jan 29 15:56:08 crc kubenswrapper[4620]: E0129 15:56:08.255675 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0efc705-fe12-4ebd-89c3-2f3d7b47f62d" containerName="extract-utilities" Jan 29 15:56:08 crc kubenswrapper[4620]: I0129 15:56:08.255696 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0efc705-fe12-4ebd-89c3-2f3d7b47f62d" containerName="extract-utilities" Jan 29 15:56:08 crc kubenswrapper[4620]: E0129 15:56:08.255728 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0efc705-fe12-4ebd-89c3-2f3d7b47f62d" containerName="extract-content" Jan 29 15:56:08 crc kubenswrapper[4620]: I0129 15:56:08.255739 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0efc705-fe12-4ebd-89c3-2f3d7b47f62d" containerName="extract-content" Jan 29 15:56:08 crc kubenswrapper[4620]: E0129 15:56:08.255780 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0efc705-fe12-4ebd-89c3-2f3d7b47f62d" containerName="registry-server" Jan 29 15:56:08 crc kubenswrapper[4620]: I0129 15:56:08.255792 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0efc705-fe12-4ebd-89c3-2f3d7b47f62d" containerName="registry-server" Jan 29 15:56:08 crc kubenswrapper[4620]: I0129 15:56:08.256046 4620 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a0efc705-fe12-4ebd-89c3-2f3d7b47f62d" containerName="registry-server" Jan 29 15:56:08 crc kubenswrapper[4620]: I0129 15:56:08.257530 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nl488" Jan 29 15:56:08 crc kubenswrapper[4620]: I0129 15:56:08.282241 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nl488"] Jan 29 15:56:08 crc kubenswrapper[4620]: I0129 15:56:08.447645 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm8dx\" (UniqueName: \"kubernetes.io/projected/25cd3e1a-8be3-4bca-9554-4b016c2b5eff-kube-api-access-sm8dx\") pod \"redhat-marketplace-nl488\" (UID: \"25cd3e1a-8be3-4bca-9554-4b016c2b5eff\") " pod="openshift-marketplace/redhat-marketplace-nl488" Jan 29 15:56:08 crc kubenswrapper[4620]: I0129 15:56:08.447824 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25cd3e1a-8be3-4bca-9554-4b016c2b5eff-catalog-content\") pod \"redhat-marketplace-nl488\" (UID: \"25cd3e1a-8be3-4bca-9554-4b016c2b5eff\") " pod="openshift-marketplace/redhat-marketplace-nl488" Jan 29 15:56:08 crc kubenswrapper[4620]: I0129 15:56:08.447949 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25cd3e1a-8be3-4bca-9554-4b016c2b5eff-utilities\") pod \"redhat-marketplace-nl488\" (UID: \"25cd3e1a-8be3-4bca-9554-4b016c2b5eff\") " pod="openshift-marketplace/redhat-marketplace-nl488" Jan 29 15:56:08 crc kubenswrapper[4620]: I0129 15:56:08.549196 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25cd3e1a-8be3-4bca-9554-4b016c2b5eff-utilities\") pod \"redhat-marketplace-nl488\" (UID: \"25cd3e1a-8be3-4bca-9554-4b016c2b5eff\") " pod="openshift-marketplace/redhat-marketplace-nl488" Jan 29 15:56:08 crc kubenswrapper[4620]: I0129 15:56:08.549259 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm8dx\" (UniqueName: \"kubernetes.io/projected/25cd3e1a-8be3-4bca-9554-4b016c2b5eff-kube-api-access-sm8dx\") pod \"redhat-marketplace-nl488\" (UID: \"25cd3e1a-8be3-4bca-9554-4b016c2b5eff\") " pod="openshift-marketplace/redhat-marketplace-nl488" Jan 29 15:56:08 crc kubenswrapper[4620]: I0129 15:56:08.549305 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25cd3e1a-8be3-4bca-9554-4b016c2b5eff-catalog-content\") pod \"redhat-marketplace-nl488\" (UID: \"25cd3e1a-8be3-4bca-9554-4b016c2b5eff\") " pod="openshift-marketplace/redhat-marketplace-nl488" Jan 29 15:56:08 crc kubenswrapper[4620]: I0129 15:56:08.549726 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25cd3e1a-8be3-4bca-9554-4b016c2b5eff-catalog-content\") pod \"redhat-marketplace-nl488\" (UID: \"25cd3e1a-8be3-4bca-9554-4b016c2b5eff\") " pod="openshift-marketplace/redhat-marketplace-nl488" Jan 29 15:56:08 crc kubenswrapper[4620]: I0129 15:56:08.549741 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25cd3e1a-8be3-4bca-9554-4b016c2b5eff-utilities\") pod 
\"redhat-marketplace-nl488\" (UID: \"25cd3e1a-8be3-4bca-9554-4b016c2b5eff\") " pod="openshift-marketplace/redhat-marketplace-nl488" Jan 29 15:56:08 crc kubenswrapper[4620]: I0129 15:56:08.573748 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm8dx\" (UniqueName: \"kubernetes.io/projected/25cd3e1a-8be3-4bca-9554-4b016c2b5eff-kube-api-access-sm8dx\") pod \"redhat-marketplace-nl488\" (UID: \"25cd3e1a-8be3-4bca-9554-4b016c2b5eff\") " pod="openshift-marketplace/redhat-marketplace-nl488" Jan 29 15:56:08 crc kubenswrapper[4620]: I0129 15:56:08.578373 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nl488" Jan 29 15:56:08 crc kubenswrapper[4620]: I0129 15:56:08.999625 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nl488"] Jan 29 15:56:09 crc kubenswrapper[4620]: I0129 15:56:09.233684 4620 generic.go:334] "Generic (PLEG): container finished" podID="25cd3e1a-8be3-4bca-9554-4b016c2b5eff" containerID="18cad31151c6f262e611466f8f4cef4fd6d731b235d65035c344860b956ce834" exitCode=0 Jan 29 15:56:09 crc kubenswrapper[4620]: I0129 15:56:09.233728 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nl488" event={"ID":"25cd3e1a-8be3-4bca-9554-4b016c2b5eff","Type":"ContainerDied","Data":"18cad31151c6f262e611466f8f4cef4fd6d731b235d65035c344860b956ce834"} Jan 29 15:56:09 crc kubenswrapper[4620]: I0129 15:56:09.234053 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nl488" event={"ID":"25cd3e1a-8be3-4bca-9554-4b016c2b5eff","Type":"ContainerStarted","Data":"ada1232a6939f41d9aff26db077888f96e1ca33337f05255a7df92806d920e93"} Jan 29 15:56:10 crc kubenswrapper[4620]: I0129 15:56:10.246185 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nl488" event={"ID":"25cd3e1a-8be3-4bca-9554-4b016c2b5eff","Type":"ContainerStarted","Data":"c97e046c8eec16486f4ec7915d41cf778a82e3e1e41d14e64640f4aaf44249f5"} Jan 29 15:56:11 crc kubenswrapper[4620]: I0129 15:56:11.254607 4620 generic.go:334] "Generic (PLEG): container finished" podID="25cd3e1a-8be3-4bca-9554-4b016c2b5eff" containerID="c97e046c8eec16486f4ec7915d41cf778a82e3e1e41d14e64640f4aaf44249f5" exitCode=0 Jan 29 15:56:11 crc kubenswrapper[4620]: I0129 15:56:11.254660 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nl488" event={"ID":"25cd3e1a-8be3-4bca-9554-4b016c2b5eff","Type":"ContainerDied","Data":"c97e046c8eec16486f4ec7915d41cf778a82e3e1e41d14e64640f4aaf44249f5"} Jan 29 15:56:12 crc kubenswrapper[4620]: I0129 15:56:12.264860 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nl488" event={"ID":"25cd3e1a-8be3-4bca-9554-4b016c2b5eff","Type":"ContainerStarted","Data":"69b1c3346afd762e64193c32f0a223ff292d49db658691fcf70b0b6fd9bf31e5"} Jan 29 15:56:12 crc kubenswrapper[4620]: I0129 15:56:12.286606 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nl488" podStartSLOduration=1.8598421109999999 podStartE2EDuration="4.286589672s" podCreationTimestamp="2026-01-29 15:56:08 +0000 UTC" firstStartedPulling="2026-01-29 15:56:09.235555094 +0000 UTC m=+3309.848382739" lastFinishedPulling="2026-01-29 15:56:11.662302655 +0000 UTC m=+3312.275130300" observedRunningTime="2026-01-29 15:56:12.285187649 +0000 UTC 
m=+3312.898015324" watchObservedRunningTime="2026-01-29 15:56:12.286589672 +0000 UTC m=+3312.899417327" Jan 29 15:56:18 crc kubenswrapper[4620]: I0129 15:56:18.579726 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nl488" Jan 29 15:56:18 crc kubenswrapper[4620]: I0129 15:56:18.580286 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nl488" Jan 29 15:56:18 crc kubenswrapper[4620]: I0129 15:56:18.619264 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nl488" Jan 29 15:56:19 crc kubenswrapper[4620]: I0129 15:56:19.364342 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nl488" Jan 29 15:56:19 crc kubenswrapper[4620]: I0129 15:56:19.853785 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nl488"] Jan 29 15:56:21 crc kubenswrapper[4620]: I0129 15:56:21.325721 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nl488" podUID="25cd3e1a-8be3-4bca-9554-4b016c2b5eff" containerName="registry-server" containerID="cri-o://69b1c3346afd762e64193c32f0a223ff292d49db658691fcf70b0b6fd9bf31e5" gracePeriod=2 Jan 29 15:56:21 crc kubenswrapper[4620]: I0129 15:56:21.758407 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nl488" Jan 29 15:56:21 crc kubenswrapper[4620]: I0129 15:56:21.953912 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25cd3e1a-8be3-4bca-9554-4b016c2b5eff-utilities\") pod \"25cd3e1a-8be3-4bca-9554-4b016c2b5eff\" (UID: \"25cd3e1a-8be3-4bca-9554-4b016c2b5eff\") " Jan 29 15:56:21 crc kubenswrapper[4620]: I0129 15:56:21.954018 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm8dx\" (UniqueName: \"kubernetes.io/projected/25cd3e1a-8be3-4bca-9554-4b016c2b5eff-kube-api-access-sm8dx\") pod \"25cd3e1a-8be3-4bca-9554-4b016c2b5eff\" (UID: \"25cd3e1a-8be3-4bca-9554-4b016c2b5eff\") " Jan 29 15:56:21 crc kubenswrapper[4620]: I0129 15:56:21.954512 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25cd3e1a-8be3-4bca-9554-4b016c2b5eff-catalog-content\") pod \"25cd3e1a-8be3-4bca-9554-4b016c2b5eff\" (UID: \"25cd3e1a-8be3-4bca-9554-4b016c2b5eff\") " Jan 29 15:56:21 crc kubenswrapper[4620]: I0129 15:56:21.955104 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25cd3e1a-8be3-4bca-9554-4b016c2b5eff-utilities" (OuterVolumeSpecName: "utilities") pod "25cd3e1a-8be3-4bca-9554-4b016c2b5eff" (UID: "25cd3e1a-8be3-4bca-9554-4b016c2b5eff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:56:21 crc kubenswrapper[4620]: I0129 15:56:21.958645 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25cd3e1a-8be3-4bca-9554-4b016c2b5eff-kube-api-access-sm8dx" (OuterVolumeSpecName: "kube-api-access-sm8dx") pod "25cd3e1a-8be3-4bca-9554-4b016c2b5eff" (UID: "25cd3e1a-8be3-4bca-9554-4b016c2b5eff"). InnerVolumeSpecName "kube-api-access-sm8dx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:56:21 crc kubenswrapper[4620]: I0129 15:56:21.976403 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25cd3e1a-8be3-4bca-9554-4b016c2b5eff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25cd3e1a-8be3-4bca-9554-4b016c2b5eff" (UID: "25cd3e1a-8be3-4bca-9554-4b016c2b5eff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:56:22 crc kubenswrapper[4620]: I0129 15:56:22.055706 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25cd3e1a-8be3-4bca-9554-4b016c2b5eff-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:56:22 crc kubenswrapper[4620]: I0129 15:56:22.055925 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm8dx\" (UniqueName: \"kubernetes.io/projected/25cd3e1a-8be3-4bca-9554-4b016c2b5eff-kube-api-access-sm8dx\") on node \"crc\" DevicePath \"\"" Jan 29 15:56:22 crc kubenswrapper[4620]: I0129 15:56:22.056019 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25cd3e1a-8be3-4bca-9554-4b016c2b5eff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:56:22 crc kubenswrapper[4620]: I0129 15:56:22.335621 4620 generic.go:334] "Generic (PLEG): container finished" podID="25cd3e1a-8be3-4bca-9554-4b016c2b5eff" containerID="69b1c3346afd762e64193c32f0a223ff292d49db658691fcf70b0b6fd9bf31e5" exitCode=0 Jan 29 15:56:22 crc kubenswrapper[4620]: I0129 15:56:22.335692 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nl488" event={"ID":"25cd3e1a-8be3-4bca-9554-4b016c2b5eff","Type":"ContainerDied","Data":"69b1c3346afd762e64193c32f0a223ff292d49db658691fcf70b0b6fd9bf31e5"} Jan 29 15:56:22 crc kubenswrapper[4620]: I0129 15:56:22.335716 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nl488" event={"ID":"25cd3e1a-8be3-4bca-9554-4b016c2b5eff","Type":"ContainerDied","Data":"ada1232a6939f41d9aff26db077888f96e1ca33337f05255a7df92806d920e93"} Jan 29 15:56:22 crc kubenswrapper[4620]: I0129 15:56:22.335978 4620 scope.go:117] "RemoveContainer" containerID="69b1c3346afd762e64193c32f0a223ff292d49db658691fcf70b0b6fd9bf31e5" Jan 29 15:56:22 crc kubenswrapper[4620]: I0129 15:56:22.336167 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nl488" Jan 29 15:56:22 crc kubenswrapper[4620]: I0129 15:56:22.370225 4620 scope.go:117] "RemoveContainer" containerID="c97e046c8eec16486f4ec7915d41cf778a82e3e1e41d14e64640f4aaf44249f5" Jan 29 15:56:22 crc kubenswrapper[4620]: I0129 15:56:22.374030 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nl488"] Jan 29 15:56:22 crc kubenswrapper[4620]: I0129 15:56:22.379615 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nl488"] Jan 29 15:56:22 crc kubenswrapper[4620]: I0129 15:56:22.398412 4620 scope.go:117] "RemoveContainer" containerID="18cad31151c6f262e611466f8f4cef4fd6d731b235d65035c344860b956ce834" Jan 29 15:56:22 crc kubenswrapper[4620]: I0129 15:56:22.418033 4620 scope.go:117] "RemoveContainer" containerID="69b1c3346afd762e64193c32f0a223ff292d49db658691fcf70b0b6fd9bf31e5" Jan 29 15:56:22 crc kubenswrapper[4620]: E0129 15:56:22.418448 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69b1c3346afd762e64193c32f0a223ff292d49db658691fcf70b0b6fd9bf31e5\": container with ID starting with 69b1c3346afd762e64193c32f0a223ff292d49db658691fcf70b0b6fd9bf31e5 not found: ID does not exist" containerID="69b1c3346afd762e64193c32f0a223ff292d49db658691fcf70b0b6fd9bf31e5" Jan 29 15:56:22 crc kubenswrapper[4620]: I0129 15:56:22.418522 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69b1c3346afd762e64193c32f0a223ff292d49db658691fcf70b0b6fd9bf31e5"} err="failed to get container status \"69b1c3346afd762e64193c32f0a223ff292d49db658691fcf70b0b6fd9bf31e5\": rpc error: code = NotFound desc = could not find container \"69b1c3346afd762e64193c32f0a223ff292d49db658691fcf70b0b6fd9bf31e5\": container with ID starting with 69b1c3346afd762e64193c32f0a223ff292d49db658691fcf70b0b6fd9bf31e5 not found: ID does not exist" Jan 29 15:56:22 crc kubenswrapper[4620]: I0129 15:56:22.418571 4620 scope.go:117] "RemoveContainer" containerID="c97e046c8eec16486f4ec7915d41cf778a82e3e1e41d14e64640f4aaf44249f5" Jan 29 15:56:22 crc kubenswrapper[4620]: E0129 15:56:22.418896 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c97e046c8eec16486f4ec7915d41cf778a82e3e1e41d14e64640f4aaf44249f5\": container with ID starting with c97e046c8eec16486f4ec7915d41cf778a82e3e1e41d14e64640f4aaf44249f5 not found: ID does not exist" containerID="c97e046c8eec16486f4ec7915d41cf778a82e3e1e41d14e64640f4aaf44249f5" Jan 29 15:56:22 crc kubenswrapper[4620]: I0129 15:56:22.418923 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c97e046c8eec16486f4ec7915d41cf778a82e3e1e41d14e64640f4aaf44249f5"} err="failed to get container status \"c97e046c8eec16486f4ec7915d41cf778a82e3e1e41d14e64640f4aaf44249f5\": rpc error: code = NotFound desc = could not find container \"c97e046c8eec16486f4ec7915d41cf778a82e3e1e41d14e64640f4aaf44249f5\": container with ID starting with c97e046c8eec16486f4ec7915d41cf778a82e3e1e41d14e64640f4aaf44249f5 not found: ID does not exist" Jan 29 15:56:22 crc kubenswrapper[4620]: I0129 15:56:22.418941 4620 scope.go:117] "RemoveContainer" containerID="18cad31151c6f262e611466f8f4cef4fd6d731b235d65035c344860b956ce834" Jan 29 15:56:22 crc kubenswrapper[4620]: E0129 15:56:22.419237 4620 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"18cad31151c6f262e611466f8f4cef4fd6d731b235d65035c344860b956ce834\": container with ID starting with 18cad31151c6f262e611466f8f4cef4fd6d731b235d65035c344860b956ce834 not found: ID does not exist" containerID="18cad31151c6f262e611466f8f4cef4fd6d731b235d65035c344860b956ce834" Jan 29 15:56:22 crc kubenswrapper[4620]: I0129 15:56:22.419265 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18cad31151c6f262e611466f8f4cef4fd6d731b235d65035c344860b956ce834"} err="failed to get container status \"18cad31151c6f262e611466f8f4cef4fd6d731b235d65035c344860b956ce834\": rpc error: code = NotFound desc = could not find container \"18cad31151c6f262e611466f8f4cef4fd6d731b235d65035c344860b956ce834\": container with ID starting with 18cad31151c6f262e611466f8f4cef4fd6d731b235d65035c344860b956ce834 not found: ID does not exist" Jan 29 15:56:22 crc kubenswrapper[4620]: I0129 15:56:22.882956 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25cd3e1a-8be3-4bca-9554-4b016c2b5eff" path="/var/lib/kubelet/pods/25cd3e1a-8be3-4bca-9554-4b016c2b5eff/volumes" Jan 29 15:57:06 crc kubenswrapper[4620]: I0129 15:57:06.750946 4620 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dlmfq"] Jan 29 15:57:06 crc kubenswrapper[4620]: E0129 15:57:06.752002 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25cd3e1a-8be3-4bca-9554-4b016c2b5eff" containerName="extract-content" Jan 29 15:57:06 crc kubenswrapper[4620]: I0129 15:57:06.752020 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="25cd3e1a-8be3-4bca-9554-4b016c2b5eff" containerName="extract-content" Jan 29 15:57:06 crc kubenswrapper[4620]: E0129 15:57:06.752038 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25cd3e1a-8be3-4bca-9554-4b016c2b5eff" containerName="registry-server" Jan 29 15:57:06 crc kubenswrapper[4620]: I0129 15:57:06.752046 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="25cd3e1a-8be3-4bca-9554-4b016c2b5eff" containerName="registry-server" Jan 29 15:57:06 crc kubenswrapper[4620]: E0129 15:57:06.752089 4620 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25cd3e1a-8be3-4bca-9554-4b016c2b5eff" containerName="extract-utilities" Jan 29 15:57:06 crc kubenswrapper[4620]: I0129 15:57:06.752097 4620 state_mem.go:107] "Deleted CPUSet assignment" podUID="25cd3e1a-8be3-4bca-9554-4b016c2b5eff" containerName="extract-utilities" Jan 29 15:57:06 crc kubenswrapper[4620]: I0129 15:57:06.752296 4620 memory_manager.go:354] "RemoveStaleState removing state" podUID="25cd3e1a-8be3-4bca-9554-4b016c2b5eff" containerName="registry-server" Jan 29 15:57:06 crc kubenswrapper[4620]: I0129 15:57:06.753591 4620 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dlmfq" Jan 29 15:57:06 crc kubenswrapper[4620]: I0129 15:57:06.762278 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dlmfq"] Jan 29 15:57:06 crc kubenswrapper[4620]: I0129 15:57:06.789265 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb8842d1-aea3-453f-8c23-b73824e382a9-catalog-content\") pod \"redhat-operators-dlmfq\" (UID: \"cb8842d1-aea3-453f-8c23-b73824e382a9\") " pod="openshift-marketplace/redhat-operators-dlmfq" Jan 29 15:57:06 crc kubenswrapper[4620]: I0129 15:57:06.789316 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdcfq\" (UniqueName: \"kubernetes.io/projected/cb8842d1-aea3-453f-8c23-b73824e382a9-kube-api-access-wdcfq\") pod \"redhat-operators-dlmfq\" (UID: \"cb8842d1-aea3-453f-8c23-b73824e382a9\") " pod="openshift-marketplace/redhat-operators-dlmfq" Jan 29 15:57:06 crc kubenswrapper[4620]: I0129 15:57:06.789362 4620 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb8842d1-aea3-453f-8c23-b73824e382a9-utilities\") pod \"redhat-operators-dlmfq\" (UID: \"cb8842d1-aea3-453f-8c23-b73824e382a9\") " pod="openshift-marketplace/redhat-operators-dlmfq" Jan 29 15:57:06 crc kubenswrapper[4620]: I0129 15:57:06.890381 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb8842d1-aea3-453f-8c23-b73824e382a9-utilities\") pod \"redhat-operators-dlmfq\" (UID: \"cb8842d1-aea3-453f-8c23-b73824e382a9\") " pod="openshift-marketplace/redhat-operators-dlmfq" Jan 29 15:57:06 crc kubenswrapper[4620]: I0129 15:57:06.890576 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb8842d1-aea3-453f-8c23-b73824e382a9-catalog-content\") pod \"redhat-operators-dlmfq\" (UID: \"cb8842d1-aea3-453f-8c23-b73824e382a9\") " pod="openshift-marketplace/redhat-operators-dlmfq" Jan 29 15:57:06 crc kubenswrapper[4620]: I0129 15:57:06.890622 4620 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdcfq\" (UniqueName: \"kubernetes.io/projected/cb8842d1-aea3-453f-8c23-b73824e382a9-kube-api-access-wdcfq\") pod \"redhat-operators-dlmfq\" (UID: \"cb8842d1-aea3-453f-8c23-b73824e382a9\") " pod="openshift-marketplace/redhat-operators-dlmfq" Jan 29 15:57:06 crc kubenswrapper[4620]: I0129 15:57:06.891425 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb8842d1-aea3-453f-8c23-b73824e382a9-utilities\") pod \"redhat-operators-dlmfq\" (UID: \"cb8842d1-aea3-453f-8c23-b73824e382a9\") " pod="openshift-marketplace/redhat-operators-dlmfq" Jan 29 15:57:06 crc kubenswrapper[4620]: I0129 15:57:06.892245 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb8842d1-aea3-453f-8c23-b73824e382a9-catalog-content\") pod \"redhat-operators-dlmfq\" (UID: \"cb8842d1-aea3-453f-8c23-b73824e382a9\") " pod="openshift-marketplace/redhat-operators-dlmfq" Jan 29 15:57:06 crc kubenswrapper[4620]: I0129 15:57:06.918971 4620 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wdcfq\" (UniqueName: \"kubernetes.io/projected/cb8842d1-aea3-453f-8c23-b73824e382a9-kube-api-access-wdcfq\") pod \"redhat-operators-dlmfq\" (UID: \"cb8842d1-aea3-453f-8c23-b73824e382a9\") " pod="openshift-marketplace/redhat-operators-dlmfq" Jan 29 15:57:07 crc kubenswrapper[4620]: I0129 15:57:07.070031 4620 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dlmfq" Jan 29 15:57:07 crc kubenswrapper[4620]: I0129 15:57:07.542993 4620 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dlmfq"] Jan 29 15:57:07 crc kubenswrapper[4620]: I0129 15:57:07.631682 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlmfq" event={"ID":"cb8842d1-aea3-453f-8c23-b73824e382a9","Type":"ContainerStarted","Data":"e680c3270bc71eaebc348c9f7b756c185edcaf5f4dc97792e93fc1c30cdfc3b2"} Jan 29 15:57:08 crc kubenswrapper[4620]: I0129 15:57:08.639142 4620 generic.go:334] "Generic (PLEG): container finished" podID="cb8842d1-aea3-453f-8c23-b73824e382a9" containerID="e0927602f37029f9edbe541a2e85c0a2c36c00bcef5d0035445439fb84e31752" exitCode=0 Jan 29 15:57:08 crc kubenswrapper[4620]: I0129 15:57:08.639353 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlmfq" event={"ID":"cb8842d1-aea3-453f-8c23-b73824e382a9","Type":"ContainerDied","Data":"e0927602f37029f9edbe541a2e85c0a2c36c00bcef5d0035445439fb84e31752"} Jan 29 15:57:10 crc kubenswrapper[4620]: I0129 15:57:10.655013 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlmfq" event={"ID":"cb8842d1-aea3-453f-8c23-b73824e382a9","Type":"ContainerStarted","Data":"13f62a2c88c18942d5cca664ba9e26cc36f5d3d4c69d3dcb75fa54af9f8972fc"} Jan 29 15:57:11 crc kubenswrapper[4620]: I0129 15:57:11.664683 4620 generic.go:334] "Generic (PLEG): container finished" podID="cb8842d1-aea3-453f-8c23-b73824e382a9" containerID="13f62a2c88c18942d5cca664ba9e26cc36f5d3d4c69d3dcb75fa54af9f8972fc" exitCode=0 Jan 29 15:57:11 crc kubenswrapper[4620]: I0129 15:57:11.664796 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlmfq" event={"ID":"cb8842d1-aea3-453f-8c23-b73824e382a9","Type":"ContainerDied","Data":"13f62a2c88c18942d5cca664ba9e26cc36f5d3d4c69d3dcb75fa54af9f8972fc"} Jan 29 15:57:13 crc kubenswrapper[4620]: I0129 15:57:13.684021 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlmfq" event={"ID":"cb8842d1-aea3-453f-8c23-b73824e382a9","Type":"ContainerStarted","Data":"b424fcda8cc6d237cb7ab9cec4aaa0468e811af7645c0f6bdfd66bea661e194b"} Jan 29 15:57:13 crc kubenswrapper[4620]: I0129 15:57:13.713294 4620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dlmfq" podStartSLOduration=3.709592482 podStartE2EDuration="7.713267164s" podCreationTimestamp="2026-01-29 15:57:06 +0000 UTC" firstStartedPulling="2026-01-29 15:57:08.640870429 +0000 UTC m=+3369.253698074" lastFinishedPulling="2026-01-29 15:57:12.644545111 +0000 UTC m=+3373.257372756" observedRunningTime="2026-01-29 15:57:13.707450032 +0000 UTC m=+3374.320277687" watchObservedRunningTime="2026-01-29 15:57:13.713267164 +0000 UTC m=+3374.326094809" Jan 29 15:57:17 crc kubenswrapper[4620]: I0129 15:57:17.070680 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dlmfq" 
Jan 29 15:57:17 crc kubenswrapper[4620]: I0129 15:57:17.071029 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dlmfq" Jan 29 15:57:18 crc kubenswrapper[4620]: I0129 15:57:18.110368 4620 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dlmfq" podUID="cb8842d1-aea3-453f-8c23-b73824e382a9" containerName="registry-server" probeResult="failure" output=< Jan 29 15:57:18 crc kubenswrapper[4620]: timeout: failed to connect service ":50051" within 1s Jan 29 15:57:18 crc kubenswrapper[4620]: > Jan 29 15:57:27 crc kubenswrapper[4620]: I0129 15:57:27.123967 4620 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dlmfq" Jan 29 15:57:27 crc kubenswrapper[4620]: I0129 15:57:27.181274 4620 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dlmfq" Jan 29 15:57:27 crc kubenswrapper[4620]: I0129 15:57:27.360119 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dlmfq"] Jan 29 15:57:28 crc kubenswrapper[4620]: I0129 15:57:28.799326 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dlmfq" podUID="cb8842d1-aea3-453f-8c23-b73824e382a9" containerName="registry-server" containerID="cri-o://b424fcda8cc6d237cb7ab9cec4aaa0468e811af7645c0f6bdfd66bea661e194b" gracePeriod=2 Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.240715 4620 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dlmfq" Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.327295 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb8842d1-aea3-453f-8c23-b73824e382a9-utilities\") pod \"cb8842d1-aea3-453f-8c23-b73824e382a9\" (UID: \"cb8842d1-aea3-453f-8c23-b73824e382a9\") " Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.327350 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdcfq\" (UniqueName: \"kubernetes.io/projected/cb8842d1-aea3-453f-8c23-b73824e382a9-kube-api-access-wdcfq\") pod \"cb8842d1-aea3-453f-8c23-b73824e382a9\" (UID: \"cb8842d1-aea3-453f-8c23-b73824e382a9\") " Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.327399 4620 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb8842d1-aea3-453f-8c23-b73824e382a9-catalog-content\") pod \"cb8842d1-aea3-453f-8c23-b73824e382a9\" (UID: \"cb8842d1-aea3-453f-8c23-b73824e382a9\") " Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.328147 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb8842d1-aea3-453f-8c23-b73824e382a9-utilities" (OuterVolumeSpecName: "utilities") pod "cb8842d1-aea3-453f-8c23-b73824e382a9" (UID: "cb8842d1-aea3-453f-8c23-b73824e382a9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.329461 4620 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb8842d1-aea3-453f-8c23-b73824e382a9-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.337490 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb8842d1-aea3-453f-8c23-b73824e382a9-kube-api-access-wdcfq" (OuterVolumeSpecName: "kube-api-access-wdcfq") pod "cb8842d1-aea3-453f-8c23-b73824e382a9" (UID: "cb8842d1-aea3-453f-8c23-b73824e382a9"). InnerVolumeSpecName "kube-api-access-wdcfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.430604 4620 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdcfq\" (UniqueName: \"kubernetes.io/projected/cb8842d1-aea3-453f-8c23-b73824e382a9-kube-api-access-wdcfq\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.452882 4620 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb8842d1-aea3-453f-8c23-b73824e382a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb8842d1-aea3-453f-8c23-b73824e382a9" (UID: "cb8842d1-aea3-453f-8c23-b73824e382a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.531929 4620 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb8842d1-aea3-453f-8c23-b73824e382a9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.807902 4620 generic.go:334] "Generic (PLEG): container finished" podID="cb8842d1-aea3-453f-8c23-b73824e382a9" containerID="b424fcda8cc6d237cb7ab9cec4aaa0468e811af7645c0f6bdfd66bea661e194b" exitCode=0 Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.807945 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlmfq" event={"ID":"cb8842d1-aea3-453f-8c23-b73824e382a9","Type":"ContainerDied","Data":"b424fcda8cc6d237cb7ab9cec4aaa0468e811af7645c0f6bdfd66bea661e194b"} Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.807963 4620 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dlmfq" Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.807997 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlmfq" event={"ID":"cb8842d1-aea3-453f-8c23-b73824e382a9","Type":"ContainerDied","Data":"e680c3270bc71eaebc348c9f7b756c185edcaf5f4dc97792e93fc1c30cdfc3b2"} Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.808022 4620 scope.go:117] "RemoveContainer" containerID="b424fcda8cc6d237cb7ab9cec4aaa0468e811af7645c0f6bdfd66bea661e194b" Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.829691 4620 scope.go:117] "RemoveContainer" containerID="13f62a2c88c18942d5cca664ba9e26cc36f5d3d4c69d3dcb75fa54af9f8972fc" Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.843887 4620 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dlmfq"] Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.848391 4620 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dlmfq"] Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.853714 4620 scope.go:117] "RemoveContainer" containerID="e0927602f37029f9edbe541a2e85c0a2c36c00bcef5d0035445439fb84e31752" Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.876158 4620 scope.go:117] "RemoveContainer" containerID="b424fcda8cc6d237cb7ab9cec4aaa0468e811af7645c0f6bdfd66bea661e194b" Jan 29 15:57:29 crc kubenswrapper[4620]: E0129 15:57:29.876523 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b424fcda8cc6d237cb7ab9cec4aaa0468e811af7645c0f6bdfd66bea661e194b\": container with ID starting with b424fcda8cc6d237cb7ab9cec4aaa0468e811af7645c0f6bdfd66bea661e194b not found: ID does not exist" containerID="b424fcda8cc6d237cb7ab9cec4aaa0468e811af7645c0f6bdfd66bea661e194b" Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.876553 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b424fcda8cc6d237cb7ab9cec4aaa0468e811af7645c0f6bdfd66bea661e194b"} err="failed to get container status \"b424fcda8cc6d237cb7ab9cec4aaa0468e811af7645c0f6bdfd66bea661e194b\": rpc error: code = NotFound desc = could not find container \"b424fcda8cc6d237cb7ab9cec4aaa0468e811af7645c0f6bdfd66bea661e194b\": container with ID starting with b424fcda8cc6d237cb7ab9cec4aaa0468e811af7645c0f6bdfd66bea661e194b not found: ID does not exist" Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.876572 4620 scope.go:117] "RemoveContainer" containerID="13f62a2c88c18942d5cca664ba9e26cc36f5d3d4c69d3dcb75fa54af9f8972fc" Jan 29 15:57:29 crc kubenswrapper[4620]: E0129 15:57:29.876861 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13f62a2c88c18942d5cca664ba9e26cc36f5d3d4c69d3dcb75fa54af9f8972fc\": container with ID starting with 13f62a2c88c18942d5cca664ba9e26cc36f5d3d4c69d3dcb75fa54af9f8972fc not found: ID does not exist" containerID="13f62a2c88c18942d5cca664ba9e26cc36f5d3d4c69d3dcb75fa54af9f8972fc" Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.876882 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13f62a2c88c18942d5cca664ba9e26cc36f5d3d4c69d3dcb75fa54af9f8972fc"} err="failed to get container status \"13f62a2c88c18942d5cca664ba9e26cc36f5d3d4c69d3dcb75fa54af9f8972fc\": rpc error: code = NotFound desc = could not find container 
\"13f62a2c88c18942d5cca664ba9e26cc36f5d3d4c69d3dcb75fa54af9f8972fc\": container with ID starting with 13f62a2c88c18942d5cca664ba9e26cc36f5d3d4c69d3dcb75fa54af9f8972fc not found: ID does not exist" Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.876895 4620 scope.go:117] "RemoveContainer" containerID="e0927602f37029f9edbe541a2e85c0a2c36c00bcef5d0035445439fb84e31752" Jan 29 15:57:29 crc kubenswrapper[4620]: E0129 15:57:29.877250 4620 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0927602f37029f9edbe541a2e85c0a2c36c00bcef5d0035445439fb84e31752\": container with ID starting with e0927602f37029f9edbe541a2e85c0a2c36c00bcef5d0035445439fb84e31752 not found: ID does not exist" containerID="e0927602f37029f9edbe541a2e85c0a2c36c00bcef5d0035445439fb84e31752" Jan 29 15:57:29 crc kubenswrapper[4620]: I0129 15:57:29.877271 4620 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0927602f37029f9edbe541a2e85c0a2c36c00bcef5d0035445439fb84e31752"} err="failed to get container status \"e0927602f37029f9edbe541a2e85c0a2c36c00bcef5d0035445439fb84e31752\": rpc error: code = NotFound desc = could not find container \"e0927602f37029f9edbe541a2e85c0a2c36c00bcef5d0035445439fb84e31752\": container with ID starting with e0927602f37029f9edbe541a2e85c0a2c36c00bcef5d0035445439fb84e31752 not found: ID does not exist" Jan 29 15:57:30 crc kubenswrapper[4620]: I0129 15:57:30.887454 4620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb8842d1-aea3-453f-8c23-b73824e382a9" path="/var/lib/kubelet/pods/cb8842d1-aea3-453f-8c23-b73824e382a9/volumes" Jan 29 15:57:34 crc kubenswrapper[4620]: I0129 15:57:34.111525 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:57:34 crc kubenswrapper[4620]: I0129 15:57:34.111991 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:58:04 crc kubenswrapper[4620]: I0129 15:58:04.111135 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:58:04 crc kubenswrapper[4620]: I0129 15:58:04.111915 4620 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:58:34 crc kubenswrapper[4620]: I0129 15:58:34.111291 4620 patch_prober.go:28] interesting pod/machine-config-daemon-7469t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:58:34 crc 
Jan 29 15:58:34 crc kubenswrapper[4620]: I0129 15:58:34.111933 4620 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7469t"
Jan 29 15:58:34 crc kubenswrapper[4620]: I0129 15:58:34.112650 4620 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e4052546046dd4c8fe27bd42898c0eb6be7f95c18a5457f2c808d59d35bcae2"} pod="openshift-machine-config-operator/machine-config-daemon-7469t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 15:58:34 crc kubenswrapper[4620]: I0129 15:58:34.112718 4620 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7469t" podUID="a76cce43-3d01-4158-b23a-e21fd5927792" containerName="machine-config-daemon" containerID="cri-o://4e4052546046dd4c8fe27bd42898c0eb6be7f95c18a5457f2c808d59d35bcae2" gracePeriod=600
Jan 29 15:58:34 crc kubenswrapper[4620]: I0129 15:58:34.283746 4620 generic.go:334] "Generic (PLEG): container finished" podID="a76cce43-3d01-4158-b23a-e21fd5927792" containerID="4e4052546046dd4c8fe27bd42898c0eb6be7f95c18a5457f2c808d59d35bcae2" exitCode=0
Jan 29 15:58:34 crc kubenswrapper[4620]: I0129 15:58:34.283781 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerDied","Data":"4e4052546046dd4c8fe27bd42898c0eb6be7f95c18a5457f2c808d59d35bcae2"}
Jan 29 15:58:34 crc kubenswrapper[4620]: I0129 15:58:34.283831 4620 scope.go:117] "RemoveContainer" containerID="6b7aa1d4ebe744b3f8e708f5f5866d2978733d6952b226752911613cb5fc5699"
Jan 29 15:58:35 crc kubenswrapper[4620]: I0129 15:58:35.302902 4620 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7469t" event={"ID":"a76cce43-3d01-4158-b23a-e21fd5927792","Type":"ContainerStarted","Data":"dea8238445274511268bef8190c7118d2e47bb7d066f525cad3fd4c9ed2dbc2b"}
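
The restart itself is a graceful kill followed by a fresh start: the runtime signals the container to terminate, waits up to the grace period (gracePeriod=600, i.e. 600 seconds here), escalates if it does not exit, and the kubelet then starts a replacement, which is the ContainerStarted event for dea823... one second later. A minimal sketch of that term-then-kill escalation for an ordinary process; this is illustrative only, since container runtimes do this through the OCI runtime rather than os.Process:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
	"time"
)

// killWithGrace sends SIGTERM, waits up to grace for the process to exit,
// and escalates to SIGKILL -- the same shape as the runtime's
// "Killing container with a grace period" step in the log above.
func killWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case <-done:
		return nil // exited within the grace period
	case <-time.After(grace):
		return cmd.Process.Kill() // grace period expired: SIGKILL
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// The log's gracePeriod=600 would be 600*time.Second; use 2s for a demo.
	fmt.Println(killWithGrace(cmd, 2*time.Second))
}

Here the container exited with exitCode=0 well inside the grace period (the ContainerDied event arrives about 170ms after the kill), so no escalation was needed.
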